Sep 30 12:50:38 localhost kernel: Linux version 5.14.0-617.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025
Sep 30 12:50:38 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Sep 30 12:50:38 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Sep 30 12:50:38 localhost kernel: BIOS-provided physical RAM map:
Sep 30 12:50:38 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 30 12:50:38 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 30 12:50:38 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 30 12:50:38 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Sep 30 12:50:38 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Sep 30 12:50:38 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 30 12:50:38 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 30 12:50:38 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Sep 30 12:50:38 localhost kernel: NX (Execute Disable) protection: active
Sep 30 12:50:38 localhost kernel: APIC: Static calls initialized
Sep 30 12:50:38 localhost kernel: SMBIOS 2.8 present.
Sep 30 12:50:38 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Sep 30 12:50:38 localhost kernel: Hypervisor detected: KVM
Sep 30 12:50:38 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 30 12:50:38 localhost kernel: kvm-clock: using sched offset of 3831009590 cycles
Sep 30 12:50:38 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 30 12:50:38 localhost kernel: tsc: Detected 2800.000 MHz processor
Sep 30 12:50:38 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 30 12:50:38 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 30 12:50:38 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Sep 30 12:50:38 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 30 12:50:38 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Sep 30 12:50:38 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Sep 30 12:50:38 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Sep 30 12:50:38 localhost kernel: Using GB pages for direct mapping
Sep 30 12:50:38 localhost kernel: RAMDISK: [mem 0x2d7d0000-0x32bdffff]
Sep 30 12:50:38 localhost kernel: ACPI: Early table checksum verification disabled
Sep 30 12:50:38 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Sep 30 12:50:38 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep 30 12:50:38 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep 30 12:50:38 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep 30 12:50:38 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Sep 30 12:50:38 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep 30 12:50:38 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Sep 30 12:50:38 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Sep 30 12:50:38 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Sep 30 12:50:38 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Sep 30 12:50:38 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Sep 30 12:50:38 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Sep 30 12:50:38 localhost kernel: No NUMA configuration found
Sep 30 12:50:38 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Sep 30 12:50:38 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Sep 30 12:50:38 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Sep 30 12:50:38 localhost kernel: Zone ranges:
Sep 30 12:50:38 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Sep 30 12:50:38 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Sep 30 12:50:38 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Sep 30 12:50:38 localhost kernel:   Device   empty
Sep 30 12:50:38 localhost kernel: Movable zone start for each node
Sep 30 12:50:38 localhost kernel: Early memory node ranges
Sep 30 12:50:38 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Sep 30 12:50:38 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Sep 30 12:50:38 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Sep 30 12:50:38 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Sep 30 12:50:38 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 30 12:50:38 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 30 12:50:38 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Sep 30 12:50:38 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Sep 30 12:50:38 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 30 12:50:38 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 30 12:50:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 30 12:50:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 30 12:50:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 30 12:50:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 30 12:50:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 30 12:50:38 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 30 12:50:38 localhost kernel: TSC deadline timer available
Sep 30 12:50:38 localhost kernel: CPU topo: Max. logical packages:   8
Sep 30 12:50:38 localhost kernel: CPU topo: Max. logical dies:       8
Sep 30 12:50:38 localhost kernel: CPU topo: Max. dies per package:   1
Sep 30 12:50:38 localhost kernel: CPU topo: Max. threads per core:   1
Sep 30 12:50:38 localhost kernel: CPU topo: Num. cores per package:     1
Sep 30 12:50:38 localhost kernel: CPU topo: Num. threads per package:   1
Sep 30 12:50:38 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Sep 30 12:50:38 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 30 12:50:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Sep 30 12:50:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Sep 30 12:50:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Sep 30 12:50:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Sep 30 12:50:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Sep 30 12:50:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Sep 30 12:50:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Sep 30 12:50:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Sep 30 12:50:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Sep 30 12:50:38 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Sep 30 12:50:38 localhost kernel: Booting paravirtualized kernel on KVM
Sep 30 12:50:38 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 30 12:50:38 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Sep 30 12:50:38 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Sep 30 12:50:38 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Sep 30 12:50:38 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Sep 30 12:50:38 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 30 12:50:38 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Sep 30 12:50:38 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64", will be passed to user space.
Sep 30 12:50:38 localhost kernel: random: crng init done
Sep 30 12:50:38 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 30 12:50:38 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 30 12:50:38 localhost kernel: Fallback order for Node 0: 0 
Sep 30 12:50:38 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Sep 30 12:50:38 localhost kernel: Policy zone: Normal
Sep 30 12:50:38 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 30 12:50:38 localhost kernel: software IO TLB: area num 8.
Sep 30 12:50:38 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Sep 30 12:50:38 localhost kernel: ftrace: allocating 49329 entries in 193 pages
Sep 30 12:50:38 localhost kernel: ftrace: allocated 193 pages with 3 groups
Sep 30 12:50:38 localhost kernel: Dynamic Preempt: voluntary
Sep 30 12:50:38 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 30 12:50:38 localhost kernel: rcu:         RCU event tracing is enabled.
Sep 30 12:50:38 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Sep 30 12:50:38 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Sep 30 12:50:38 localhost kernel:         Rude variant of Tasks RCU enabled.
Sep 30 12:50:38 localhost kernel:         Tracing variant of Tasks RCU enabled.
Sep 30 12:50:38 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 30 12:50:38 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Sep 30 12:50:38 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Sep 30 12:50:38 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Sep 30 12:50:38 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Sep 30 12:50:38 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Sep 30 12:50:38 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 30 12:50:38 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Sep 30 12:50:38 localhost kernel: Console: colour VGA+ 80x25
Sep 30 12:50:38 localhost kernel: printk: console [ttyS0] enabled
Sep 30 12:50:38 localhost kernel: ACPI: Core revision 20230331
Sep 30 12:50:38 localhost kernel: APIC: Switch to symmetric I/O mode setup
Sep 30 12:50:38 localhost kernel: x2apic enabled
Sep 30 12:50:38 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Sep 30 12:50:38 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 30 12:50:38 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Sep 30 12:50:38 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 30 12:50:38 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 30 12:50:38 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 30 12:50:38 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 30 12:50:38 localhost kernel: Spectre V2 : Mitigation: Retpolines
Sep 30 12:50:38 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 30 12:50:38 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 30 12:50:38 localhost kernel: RETBleed: Mitigation: untrained return thunk
Sep 30 12:50:38 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 30 12:50:38 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 30 12:50:38 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 30 12:50:38 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 30 12:50:38 localhost kernel: x86/bugs: return thunk changed
Sep 30 12:50:38 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 30 12:50:38 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 30 12:50:38 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 30 12:50:38 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 30 12:50:38 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Sep 30 12:50:38 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 30 12:50:38 localhost kernel: Freeing SMP alternatives memory: 40K
Sep 30 12:50:38 localhost kernel: pid_max: default: 32768 minimum: 301
Sep 30 12:50:38 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Sep 30 12:50:38 localhost kernel: landlock: Up and running.
Sep 30 12:50:38 localhost kernel: Yama: becoming mindful.
Sep 30 12:50:38 localhost kernel: SELinux:  Initializing.
Sep 30 12:50:38 localhost kernel: LSM support for eBPF active
Sep 30 12:50:38 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 30 12:50:38 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 30 12:50:38 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 30 12:50:38 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 30 12:50:38 localhost kernel: ... version:                0
Sep 30 12:50:38 localhost kernel: ... bit width:              48
Sep 30 12:50:38 localhost kernel: ... generic registers:      6
Sep 30 12:50:38 localhost kernel: ... value mask:             0000ffffffffffff
Sep 30 12:50:38 localhost kernel: ... max period:             00007fffffffffff
Sep 30 12:50:38 localhost kernel: ... fixed-purpose events:   0
Sep 30 12:50:38 localhost kernel: ... event mask:             000000000000003f
Sep 30 12:50:38 localhost kernel: signal: max sigframe size: 1776
Sep 30 12:50:38 localhost kernel: rcu: Hierarchical SRCU implementation.
Sep 30 12:50:38 localhost kernel: rcu:         Max phase no-delay instances is 400.
Sep 30 12:50:38 localhost kernel: smp: Bringing up secondary CPUs ...
Sep 30 12:50:38 localhost kernel: smpboot: x86: Booting SMP configuration:
Sep 30 12:50:38 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Sep 30 12:50:38 localhost kernel: smp: Brought up 1 node, 8 CPUs
Sep 30 12:50:38 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Sep 30 12:50:38 localhost kernel: node 0 deferred pages initialised in 33ms
Sep 30 12:50:38 localhost kernel: Memory: 7765628K/8388068K available (16384K kernel code, 5784K rwdata, 13988K rodata, 4072K init, 7304K bss, 616480K reserved, 0K cma-reserved)
Sep 30 12:50:38 localhost kernel: devtmpfs: initialized
Sep 30 12:50:38 localhost kernel: x86/mm: Memory block size: 128MB
Sep 30 12:50:38 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 30 12:50:38 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Sep 30 12:50:38 localhost kernel: pinctrl core: initialized pinctrl subsystem
Sep 30 12:50:38 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 30 12:50:38 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Sep 30 12:50:38 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 30 12:50:38 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 30 12:50:38 localhost kernel: audit: initializing netlink subsys (disabled)
Sep 30 12:50:38 localhost kernel: audit: type=2000 audit(1759236636.799:1): state=initialized audit_enabled=0 res=1
Sep 30 12:50:38 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Sep 30 12:50:38 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 30 12:50:38 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 30 12:50:38 localhost kernel: cpuidle: using governor menu
Sep 30 12:50:38 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 30 12:50:38 localhost kernel: PCI: Using configuration type 1 for base access
Sep 30 12:50:38 localhost kernel: PCI: Using configuration type 1 for extended access
Sep 30 12:50:38 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 30 12:50:38 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 30 12:50:38 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 30 12:50:38 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 30 12:50:38 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 30 12:50:38 localhost kernel: Demotion targets for Node 0: null
Sep 30 12:50:38 localhost kernel: cryptd: max_cpu_qlen set to 1000
Sep 30 12:50:38 localhost kernel: ACPI: Added _OSI(Module Device)
Sep 30 12:50:38 localhost kernel: ACPI: Added _OSI(Processor Device)
Sep 30 12:50:38 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 30 12:50:38 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 30 12:50:38 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 30 12:50:38 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 30 12:50:38 localhost kernel: ACPI: Interpreter enabled
Sep 30 12:50:38 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Sep 30 12:50:38 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Sep 30 12:50:38 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 30 12:50:38 localhost kernel: PCI: Using E820 reservations for host bridge windows
Sep 30 12:50:38 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 30 12:50:38 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 30 12:50:38 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [3] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [4] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [5] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [6] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [7] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [8] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [9] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [10] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [11] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [12] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [13] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [14] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [15] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [16] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [17] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [18] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [19] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [20] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [21] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [22] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [23] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [24] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [25] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [26] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [27] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [28] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [29] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [30] registered
Sep 30 12:50:38 localhost kernel: acpiphp: Slot [31] registered
Sep 30 12:50:38 localhost kernel: PCI host bridge to bus 0000:00
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 30 12:50:38 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Sep 30 12:50:38 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Sep 30 12:50:38 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Sep 30 12:50:38 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 30 12:50:38 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Sep 30 12:50:38 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Sep 30 12:50:38 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 30 12:50:38 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 30 12:50:38 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Sep 30 12:50:38 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Sep 30 12:50:38 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 30 12:50:38 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Sep 30 12:50:38 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 30 12:50:38 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Sep 30 12:50:38 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Sep 30 12:50:38 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 30 12:50:38 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Sep 30 12:50:38 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Sep 30 12:50:38 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 30 12:50:38 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 30 12:50:38 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Sep 30 12:50:38 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 30 12:50:38 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 30 12:50:38 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 30 12:50:38 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 30 12:50:38 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 30 12:50:38 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 30 12:50:38 localhost kernel: iommu: Default domain type: Translated
Sep 30 12:50:38 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 30 12:50:38 localhost kernel: SCSI subsystem initialized
Sep 30 12:50:38 localhost kernel: ACPI: bus type USB registered
Sep 30 12:50:38 localhost kernel: usbcore: registered new interface driver usbfs
Sep 30 12:50:38 localhost kernel: usbcore: registered new interface driver hub
Sep 30 12:50:38 localhost kernel: usbcore: registered new device driver usb
Sep 30 12:50:38 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 30 12:50:38 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Sep 30 12:50:38 localhost kernel: PTP clock support registered
Sep 30 12:50:38 localhost kernel: EDAC MC: Ver: 3.0.0
Sep 30 12:50:38 localhost kernel: NetLabel: Initializing
Sep 30 12:50:38 localhost kernel: NetLabel:  domain hash size = 128
Sep 30 12:50:38 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Sep 30 12:50:38 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Sep 30 12:50:38 localhost kernel: PCI: Using ACPI for IRQ routing
Sep 30 12:50:38 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 30 12:50:38 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 30 12:50:38 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Sep 30 12:50:38 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 30 12:50:38 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 30 12:50:38 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 30 12:50:38 localhost kernel: vgaarb: loaded
Sep 30 12:50:38 localhost kernel: clocksource: Switched to clocksource kvm-clock
Sep 30 12:50:38 localhost kernel: VFS: Disk quotas dquot_6.6.0
Sep 30 12:50:38 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 30 12:50:38 localhost kernel: pnp: PnP ACPI init
Sep 30 12:50:38 localhost kernel: pnp 00:03: [dma 2]
Sep 30 12:50:38 localhost kernel: pnp: PnP ACPI: found 5 devices
Sep 30 12:50:38 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 30 12:50:38 localhost kernel: NET: Registered PF_INET protocol family
Sep 30 12:50:38 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 30 12:50:38 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 30 12:50:38 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 30 12:50:38 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 30 12:50:38 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Sep 30 12:50:38 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 30 12:50:38 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Sep 30 12:50:38 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 30 12:50:38 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 30 12:50:38 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 30 12:50:38 localhost kernel: NET: Registered PF_XDP protocol family
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Sep 30 12:50:38 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 30 12:50:38 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 30 12:50:38 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 30 12:50:38 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 106928 usecs
Sep 30 12:50:38 localhost kernel: PCI: CLS 0 bytes, default 64
Sep 30 12:50:38 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 30 12:50:38 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Sep 30 12:50:38 localhost kernel: ACPI: bus type thunderbolt registered
Sep 30 12:50:38 localhost kernel: Trying to unpack rootfs image as initramfs...
Sep 30 12:50:38 localhost kernel: Initialise system trusted keyrings
Sep 30 12:50:38 localhost kernel: Key type blacklist registered
Sep 30 12:50:38 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Sep 30 12:50:38 localhost kernel: zbud: loaded
Sep 30 12:50:38 localhost kernel: integrity: Platform Keyring initialized
Sep 30 12:50:38 localhost kernel: integrity: Machine keyring initialized
Sep 30 12:50:38 localhost kernel: Freeing initrd memory: 86080K
Sep 30 12:50:38 localhost kernel: NET: Registered PF_ALG protocol family
Sep 30 12:50:38 localhost kernel: xor: automatically using best checksumming function   avx       
Sep 30 12:50:38 localhost kernel: Key type asymmetric registered
Sep 30 12:50:38 localhost kernel: Asymmetric key parser 'x509' registered
Sep 30 12:50:38 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Sep 30 12:50:38 localhost kernel: io scheduler mq-deadline registered
Sep 30 12:50:38 localhost kernel: io scheduler kyber registered
Sep 30 12:50:38 localhost kernel: io scheduler bfq registered
Sep 30 12:50:38 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Sep 30 12:50:38 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Sep 30 12:50:38 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Sep 30 12:50:38 localhost kernel: ACPI: button: Power Button [PWRF]
Sep 30 12:50:38 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 30 12:50:38 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 30 12:50:38 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 30 12:50:38 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 30 12:50:38 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 30 12:50:38 localhost kernel: Non-volatile memory driver v1.3
Sep 30 12:50:38 localhost kernel: rdac: device handler registered
Sep 30 12:50:38 localhost kernel: hp_sw: device handler registered
Sep 30 12:50:38 localhost kernel: emc: device handler registered
Sep 30 12:50:38 localhost kernel: alua: device handler registered
Sep 30 12:50:38 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 30 12:50:38 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 30 12:50:38 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 30 12:50:38 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Sep 30 12:50:38 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Sep 30 12:50:38 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Sep 30 12:50:38 localhost kernel: usb usb1: Product: UHCI Host Controller
Sep 30 12:50:38 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-617.el9.x86_64 uhci_hcd
Sep 30 12:50:38 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Sep 30 12:50:38 localhost kernel: hub 1-0:1.0: USB hub found
Sep 30 12:50:38 localhost kernel: hub 1-0:1.0: 2 ports detected
Sep 30 12:50:38 localhost kernel: usbcore: registered new interface driver usbserial_generic
Sep 30 12:50:38 localhost kernel: usbserial: USB Serial support registered for generic
Sep 30 12:50:38 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 30 12:50:38 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 30 12:50:38 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 30 12:50:38 localhost kernel: mousedev: PS/2 mouse device common for all mice
Sep 30 12:50:38 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 30 12:50:38 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 30 12:50:38 localhost kernel: rtc_cmos 00:04: registered as rtc0
Sep 30 12:50:38 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-09-30T12:50:37 UTC (1759236637)
Sep 30 12:50:38 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 30 12:50:38 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 30 12:50:38 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Sep 30 12:50:38 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 30 12:50:38 localhost kernel: usbcore: registered new interface driver usbhid
Sep 30 12:50:38 localhost kernel: usbhid: USB HID core driver
Sep 30 12:50:38 localhost kernel: drop_monitor: Initializing network drop monitor service
Sep 30 12:50:38 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Sep 30 12:50:38 localhost kernel: Initializing XFRM netlink socket
Sep 30 12:50:38 localhost kernel: NET: Registered PF_INET6 protocol family
Sep 30 12:50:38 localhost kernel: Segment Routing with IPv6
Sep 30 12:50:38 localhost kernel: NET: Registered PF_PACKET protocol family
Sep 30 12:50:38 localhost kernel: mpls_gso: MPLS GSO support
Sep 30 12:50:38 localhost kernel: IPI shorthand broadcast: enabled
Sep 30 12:50:38 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Sep 30 12:50:38 localhost kernel: AES CTR mode by8 optimization enabled
Sep 30 12:50:38 localhost kernel: sched_clock: Marking stable (1288001550, 139728690)->(1506139430, -78409190)
Sep 30 12:50:38 localhost kernel: registered taskstats version 1
Sep 30 12:50:38 localhost kernel: Loading compiled-in X.509 certificates
Sep 30 12:50:38 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bb2966091bafcba340f8183756023c985dcc8fe9'
Sep 30 12:50:38 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Sep 30 12:50:38 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Sep 30 12:50:38 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Sep 30 12:50:38 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Sep 30 12:50:38 localhost kernel: Demotion targets for Node 0: null
Sep 30 12:50:38 localhost kernel: page_owner is disabled
Sep 30 12:50:38 localhost kernel: Key type .fscrypt registered
Sep 30 12:50:38 localhost kernel: Key type fscrypt-provisioning registered
Sep 30 12:50:38 localhost kernel: Key type big_key registered
Sep 30 12:50:38 localhost kernel: Key type encrypted registered
Sep 30 12:50:38 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 30 12:50:38 localhost kernel: Loading compiled-in module X.509 certificates
Sep 30 12:50:38 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bb2966091bafcba340f8183756023c985dcc8fe9'
Sep 30 12:50:38 localhost kernel: ima: Allocated hash algorithm: sha256
Sep 30 12:50:38 localhost kernel: ima: No architecture policies found
Sep 30 12:50:38 localhost kernel: evm: Initialising EVM extended attributes:
Sep 30 12:50:38 localhost kernel: evm: security.selinux
Sep 30 12:50:38 localhost kernel: evm: security.SMACK64 (disabled)
Sep 30 12:50:38 localhost kernel: evm: security.SMACK64EXEC (disabled)
Sep 30 12:50:38 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Sep 30 12:50:38 localhost kernel: evm: security.SMACK64MMAP (disabled)
Sep 30 12:50:38 localhost kernel: evm: security.apparmor (disabled)
Sep 30 12:50:38 localhost kernel: evm: security.ima
Sep 30 12:50:38 localhost kernel: evm: security.capability
Sep 30 12:50:38 localhost kernel: evm: HMAC attrs: 0x1
Sep 30 12:50:38 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Sep 30 12:50:38 localhost kernel: Running certificate verification RSA selftest
Sep 30 12:50:38 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Sep 30 12:50:38 localhost kernel: Running certificate verification ECDSA selftest
Sep 30 12:50:38 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Sep 30 12:50:38 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Sep 30 12:50:38 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Sep 30 12:50:38 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Sep 30 12:50:38 localhost kernel: usb 1-1: Manufacturer: QEMU
Sep 30 12:50:38 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Sep 30 12:50:38 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Sep 30 12:50:38 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Sep 30 12:50:38 localhost kernel: clk: Disabling unused clocks
Sep 30 12:50:38 localhost kernel: Freeing unused decrypted memory: 2028K
Sep 30 12:50:38 localhost kernel: Freeing unused kernel image (initmem) memory: 4072K
Sep 30 12:50:38 localhost kernel: Write protecting the kernel read-only data: 30720k
Sep 30 12:50:38 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 348K
Sep 30 12:50:38 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Sep 30 12:50:38 localhost kernel: Run /init as init process
Sep 30 12:50:38 localhost kernel:   with arguments:
Sep 30 12:50:38 localhost kernel:     /init
Sep 30 12:50:38 localhost kernel:   with environment:
Sep 30 12:50:38 localhost kernel:     HOME=/
Sep 30 12:50:38 localhost kernel:     TERM=linux
Sep 30 12:50:38 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64
Sep 30 12:50:38 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 30 12:50:38 localhost systemd[1]: Detected virtualization kvm.
Sep 30 12:50:38 localhost systemd[1]: Detected architecture x86-64.
Sep 30 12:50:38 localhost systemd[1]: Running in initrd.
Sep 30 12:50:38 localhost systemd[1]: No hostname configured, using default hostname.
Sep 30 12:50:38 localhost systemd[1]: Hostname set to <localhost>.
Sep 30 12:50:38 localhost systemd[1]: Initializing machine ID from VM UUID.
Sep 30 12:50:38 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Sep 30 12:50:38 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Sep 30 12:50:38 localhost systemd[1]: Reached target Local Encrypted Volumes.
Sep 30 12:50:38 localhost systemd[1]: Reached target Initrd /usr File System.
Sep 30 12:50:38 localhost systemd[1]: Reached target Local File Systems.
Sep 30 12:50:38 localhost systemd[1]: Reached target Path Units.
Sep 30 12:50:38 localhost systemd[1]: Reached target Slice Units.
Sep 30 12:50:38 localhost systemd[1]: Reached target Swaps.
Sep 30 12:50:38 localhost systemd[1]: Reached target Timer Units.
Sep 30 12:50:38 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Sep 30 12:50:38 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Sep 30 12:50:38 localhost systemd[1]: Listening on Journal Socket.
Sep 30 12:50:38 localhost systemd[1]: Listening on udev Control Socket.
Sep 30 12:50:38 localhost systemd[1]: Listening on udev Kernel Socket.
Sep 30 12:50:38 localhost systemd[1]: Reached target Socket Units.
Sep 30 12:50:38 localhost systemd[1]: Starting Create List of Static Device Nodes...
Sep 30 12:50:38 localhost systemd[1]: Starting Journal Service...
Sep 30 12:50:38 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Sep 30 12:50:38 localhost systemd[1]: Starting Apply Kernel Variables...
Sep 30 12:50:38 localhost systemd[1]: Starting Create System Users...
Sep 30 12:50:38 localhost systemd[1]: Starting Setup Virtual Console...
Sep 30 12:50:38 localhost systemd[1]: Finished Create List of Static Device Nodes.
Sep 30 12:50:38 localhost systemd[1]: Finished Apply Kernel Variables.
Sep 30 12:50:38 localhost systemd[1]: Finished Create System Users.
Sep 30 12:50:38 localhost systemd-journald[308]: Journal started
Sep 30 12:50:38 localhost systemd-journald[308]: Runtime Journal (/run/log/journal/294e3813d40945a19fb5458cf671b312) is 8.0M, max 153.5M, 145.5M free.
Sep 30 12:50:38 localhost systemd-sysusers[312]: Creating group 'users' with GID 100.
Sep 30 12:50:38 localhost systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Sep 30 12:50:38 localhost systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Sep 30 12:50:38 localhost systemd[1]: Started Journal Service.
Sep 30 12:50:38 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Sep 30 12:50:38 localhost systemd[1]: Starting Create Volatile Files and Directories...
Sep 30 12:50:38 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Sep 30 12:50:38 localhost systemd[1]: Finished Create Volatile Files and Directories.
Sep 30 12:50:38 localhost systemd[1]: Finished Setup Virtual Console.
Sep 30 12:50:38 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Sep 30 12:50:38 localhost systemd[1]: Starting dracut cmdline hook...
Sep 30 12:50:38 localhost dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Sep 30 12:50:38 localhost dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Sep 30 12:50:38 localhost systemd[1]: Finished dracut cmdline hook.
Sep 30 12:50:38 localhost systemd[1]: Starting dracut pre-udev hook...
Sep 30 12:50:38 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 30 12:50:38 localhost kernel: device-mapper: uevent: version 1.0.3
Sep 30 12:50:38 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Sep 30 12:50:38 localhost kernel: RPC: Registered named UNIX socket transport module.
Sep 30 12:50:38 localhost kernel: RPC: Registered udp transport module.
Sep 30 12:50:38 localhost kernel: RPC: Registered tcp transport module.
Sep 30 12:50:38 localhost kernel: RPC: Registered tcp-with-tls transport module.
Sep 30 12:50:38 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Sep 30 12:50:39 localhost rpc.statd[446]: Version 2.5.4 starting
Sep 30 12:50:39 localhost rpc.statd[446]: Initializing NSM state
Sep 30 12:50:39 localhost rpc.idmapd[451]: Setting log level to 0
Sep 30 12:50:39 localhost systemd[1]: Finished dracut pre-udev hook.
Sep 30 12:50:39 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Sep 30 12:50:39 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Sep 30 12:50:39 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Sep 30 12:50:39 localhost systemd[1]: Starting dracut pre-trigger hook...
Sep 30 12:50:39 localhost systemd[1]: Finished dracut pre-trigger hook.
Sep 30 12:50:39 localhost systemd[1]: Starting Coldplug All udev Devices...
Sep 30 12:50:39 localhost systemd[1]: Created slice Slice /system/modprobe.
Sep 30 12:50:39 localhost systemd[1]: Starting Load Kernel Module configfs...
Sep 30 12:50:39 localhost systemd[1]: Finished Coldplug All udev Devices.
Sep 30 12:50:39 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 30 12:50:39 localhost systemd[1]: Finished Load Kernel Module configfs.
Sep 30 12:50:39 localhost systemd[1]: Mounting Kernel Configuration File System...
Sep 30 12:50:39 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Sep 30 12:50:39 localhost systemd[1]: Reached target Network.
Sep 30 12:50:39 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Sep 30 12:50:39 localhost systemd[1]: Starting dracut initqueue hook...
Sep 30 12:50:39 localhost systemd[1]: Mounted Kernel Configuration File System.
Sep 30 12:50:39 localhost systemd[1]: Reached target System Initialization.
Sep 30 12:50:39 localhost systemd[1]: Reached target Basic System.
Sep 30 12:50:39 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Sep 30 12:50:39 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Sep 30 12:50:39 localhost kernel: libata version 3.00 loaded.
Sep 30 12:50:39 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Sep 30 12:50:39 localhost kernel:  vda: vda1
Sep 30 12:50:39 localhost systemd-udevd[478]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 12:50:39 localhost kernel: scsi host0: ata_piix
Sep 30 12:50:39 localhost kernel: scsi host1: ata_piix
Sep 30 12:50:39 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Sep 30 12:50:39 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Sep 30 12:50:39 localhost systemd[1]: Found device /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8.
Sep 30 12:50:39 localhost systemd[1]: Reached target Initrd Root Device.
Sep 30 12:50:39 localhost kernel: ata1: found unknown device (class 0)
Sep 30 12:50:39 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 30 12:50:39 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Sep 30 12:50:39 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Sep 30 12:50:39 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 30 12:50:39 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 30 12:50:39 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Sep 30 12:50:39 localhost systemd[1]: Finished dracut initqueue hook.
Sep 30 12:50:39 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Sep 30 12:50:39 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Sep 30 12:50:39 localhost systemd[1]: Reached target Remote File Systems.
Sep 30 12:50:39 localhost systemd[1]: Starting dracut pre-mount hook...
Sep 30 12:50:39 localhost systemd[1]: Finished dracut pre-mount hook.
Sep 30 12:50:39 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8...
Sep 30 12:50:39 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Sep 30 12:50:39 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8.
Sep 30 12:50:39 localhost systemd[1]: Mounting /sysroot...
Sep 30 12:50:40 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Sep 30 12:50:40 localhost kernel: XFS (vda1): Mounting V5 Filesystem d6a81468-b74c-4055-b485-def635ab40f8
Sep 30 12:50:40 localhost kernel: XFS (vda1): Ending clean mount
Sep 30 12:50:40 localhost systemd[1]: Mounted /sysroot.
Sep 30 12:50:40 localhost systemd[1]: Reached target Initrd Root File System.
Sep 30 12:50:40 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Sep 30 12:50:40 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Sep 30 12:50:40 localhost systemd[1]: Reached target Initrd File Systems.
Sep 30 12:50:40 localhost systemd[1]: Reached target Initrd Default Target.
Sep 30 12:50:40 localhost systemd[1]: Starting dracut mount hook...
Sep 30 12:50:40 localhost systemd[1]: Finished dracut mount hook.
Sep 30 12:50:40 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Sep 30 12:50:40 localhost rpc.idmapd[451]: exiting on signal 15
Sep 30 12:50:40 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Sep 30 12:50:40 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Sep 30 12:50:40 localhost systemd[1]: Stopped target Network.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Timer Units.
Sep 30 12:50:40 localhost systemd[1]: dbus.socket: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Sep 30 12:50:40 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Initrd Default Target.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Basic System.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Initrd Root Device.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Initrd /usr File System.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Path Units.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Remote File Systems.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Slice Units.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Socket Units.
Sep 30 12:50:40 localhost systemd[1]: Stopped target System Initialization.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Local File Systems.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Swaps.
Sep 30 12:50:40 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped dracut mount hook.
Sep 30 12:50:40 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped dracut pre-mount hook.
Sep 30 12:50:40 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Sep 30 12:50:40 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Sep 30 12:50:40 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped dracut initqueue hook.
Sep 30 12:50:40 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped Apply Kernel Variables.
Sep 30 12:50:40 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Sep 30 12:50:40 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped Coldplug All udev Devices.
Sep 30 12:50:40 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped dracut pre-trigger hook.
Sep 30 12:50:40 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Sep 30 12:50:40 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped Setup Virtual Console.
Sep 30 12:50:40 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Sep 30 12:50:40 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Sep 30 12:50:40 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Closed udev Control Socket.
Sep 30 12:50:40 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Closed udev Kernel Socket.
Sep 30 12:50:40 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped dracut pre-udev hook.
Sep 30 12:50:40 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped dracut cmdline hook.
Sep 30 12:50:40 localhost systemd[1]: Starting Cleanup udev Database...
Sep 30 12:50:40 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Sep 30 12:50:40 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Sep 30 12:50:40 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Stopped Create System Users.
Sep 30 12:50:40 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 30 12:50:40 localhost systemd[1]: Finished Cleanup udev Database.
Sep 30 12:50:40 localhost systemd[1]: Reached target Switch Root.
Sep 30 12:50:40 localhost systemd[1]: Starting Switch Root...
Sep 30 12:50:40 localhost systemd[1]: Switching root.
Sep 30 12:50:40 localhost systemd-journald[308]: Journal stopped
Sep 30 12:50:41 localhost systemd-journald[308]: Received SIGTERM from PID 1 (systemd).
Sep 30 12:50:41 localhost kernel: audit: type=1404 audit(1759236640.991:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Sep 30 12:50:41 localhost kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 12:50:41 localhost kernel: SELinux:  policy capability open_perms=1
Sep 30 12:50:41 localhost kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 12:50:41 localhost kernel: SELinux:  policy capability always_check_network=0
Sep 30 12:50:41 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 12:50:41 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 12:50:41 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 12:50:41 localhost kernel: audit: type=1403 audit(1759236641.173:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 30 12:50:41 localhost systemd[1]: Successfully loaded SELinux policy in 186.541ms.
Sep 30 12:50:41 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.752ms.
Sep 30 12:50:41 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 30 12:50:41 localhost systemd[1]: Detected virtualization kvm.
Sep 30 12:50:41 localhost systemd[1]: Detected architecture x86-64.
Sep 30 12:50:41 localhost systemd-rc-local-generator[639]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 12:50:41 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 30 12:50:41 localhost systemd[1]: Stopped Switch Root.
Sep 30 12:50:41 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 30 12:50:41 localhost systemd[1]: Created slice Slice /system/getty.
Sep 30 12:50:41 localhost systemd[1]: Created slice Slice /system/serial-getty.
Sep 30 12:50:41 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Sep 30 12:50:41 localhost systemd[1]: Created slice User and Session Slice.
Sep 30 12:50:41 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Sep 30 12:50:41 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Sep 30 12:50:41 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Sep 30 12:50:41 localhost systemd[1]: Reached target Local Encrypted Volumes.
Sep 30 12:50:41 localhost systemd[1]: Stopped target Switch Root.
Sep 30 12:50:41 localhost systemd[1]: Stopped target Initrd File Systems.
Sep 30 12:50:41 localhost systemd[1]: Stopped target Initrd Root File System.
Sep 30 12:50:41 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Sep 30 12:50:41 localhost systemd[1]: Reached target Path Units.
Sep 30 12:50:41 localhost systemd[1]: Reached target rpc_pipefs.target.
Sep 30 12:50:41 localhost systemd[1]: Reached target Slice Units.
Sep 30 12:50:41 localhost systemd[1]: Reached target Swaps.
Sep 30 12:50:41 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Sep 30 12:50:41 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Sep 30 12:50:41 localhost systemd[1]: Reached target RPC Port Mapper.
Sep 30 12:50:41 localhost systemd[1]: Listening on Process Core Dump Socket.
Sep 30 12:50:41 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Sep 30 12:50:41 localhost systemd[1]: Listening on udev Control Socket.
Sep 30 12:50:41 localhost systemd[1]: Listening on udev Kernel Socket.
Sep 30 12:50:41 localhost systemd[1]: Mounting Huge Pages File System...
Sep 30 12:50:41 localhost systemd[1]: Mounting POSIX Message Queue File System...
Sep 30 12:50:41 localhost systemd[1]: Mounting Kernel Debug File System...
Sep 30 12:50:41 localhost systemd[1]: Mounting Kernel Trace File System...
Sep 30 12:50:41 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Sep 30 12:50:41 localhost systemd[1]: Starting Create List of Static Device Nodes...
Sep 30 12:50:41 localhost systemd[1]: Starting Load Kernel Module configfs...
Sep 30 12:50:41 localhost systemd[1]: Starting Load Kernel Module drm...
Sep 30 12:50:41 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Sep 30 12:50:41 localhost systemd[1]: Starting Load Kernel Module fuse...
Sep 30 12:50:41 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Sep 30 12:50:41 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 30 12:50:41 localhost systemd[1]: Stopped File System Check on Root Device.
Sep 30 12:50:41 localhost systemd[1]: Stopped Journal Service.
Sep 30 12:50:41 localhost systemd[1]: Starting Journal Service...
Sep 30 12:50:41 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Sep 30 12:50:41 localhost systemd[1]: Starting Generate network units from Kernel command line...
Sep 30 12:50:41 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 30 12:50:41 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Sep 30 12:50:41 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 30 12:50:41 localhost systemd[1]: Starting Apply Kernel Variables...
Sep 30 12:50:41 localhost systemd[1]: Starting Coldplug All udev Devices...
Sep 30 12:50:41 localhost systemd[1]: Mounted Huge Pages File System.
Sep 30 12:50:41 localhost systemd[1]: Mounted POSIX Message Queue File System.
Sep 30 12:50:41 localhost kernel: fuse: init (API version 7.37)
Sep 30 12:50:41 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Sep 30 12:50:41 localhost systemd[1]: Mounted Kernel Debug File System.
Sep 30 12:50:41 localhost systemd[1]: Mounted Kernel Trace File System.
Sep 30 12:50:41 localhost systemd[1]: Finished Create List of Static Device Nodes.
Sep 30 12:50:41 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 30 12:50:41 localhost systemd[1]: Finished Load Kernel Module configfs.
Sep 30 12:50:41 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 30 12:50:41 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Sep 30 12:50:41 localhost systemd-journald[680]: Journal started
Sep 30 12:50:41 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/21983c68f36a73745cc172a394ebc51d) is 8.0M, max 153.5M, 145.5M free.
Sep 30 12:50:41 localhost systemd[1]: Queued start job for default target Multi-User System.
Sep 30 12:50:41 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 30 12:50:41 localhost systemd[1]: Started Journal Service.
Sep 30 12:50:41 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 30 12:50:41 localhost systemd[1]: Finished Load Kernel Module fuse.
Sep 30 12:50:41 localhost kernel: ACPI: bus type drm_connector registered
Sep 30 12:50:41 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Sep 30 12:50:41 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 30 12:50:41 localhost systemd[1]: Finished Load Kernel Module drm.
Sep 30 12:50:41 localhost systemd[1]: Finished Generate network units from Kernel command line.
Sep 30 12:50:41 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Sep 30 12:50:41 localhost systemd[1]: Finished Apply Kernel Variables.
Sep 30 12:50:41 localhost systemd[1]: Mounting FUSE Control File System...
Sep 30 12:50:41 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Sep 30 12:50:41 localhost systemd[1]: Starting Rebuild Hardware Database...
Sep 30 12:50:41 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Sep 30 12:50:41 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 30 12:50:41 localhost systemd[1]: Starting Load/Save OS Random Seed...
Sep 30 12:50:41 localhost systemd[1]: Starting Create System Users...
Sep 30 12:50:41 localhost systemd[1]: Mounted FUSE Control File System.
Sep 30 12:50:41 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/21983c68f36a73745cc172a394ebc51d) is 8.0M, max 153.5M, 145.5M free.
Sep 30 12:50:41 localhost systemd-journald[680]: Received client request to flush runtime journal.
Sep 30 12:50:41 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Sep 30 12:50:41 localhost systemd[1]: Finished Load/Save OS Random Seed.
Sep 30 12:50:41 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Sep 30 12:50:41 localhost systemd[1]: Finished Create System Users.
Sep 30 12:50:41 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Sep 30 12:50:41 localhost systemd[1]: Finished Coldplug All udev Devices.
Sep 30 12:50:41 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Sep 30 12:50:41 localhost systemd[1]: Reached target Preparation for Local File Systems.
Sep 30 12:50:41 localhost systemd[1]: Reached target Local File Systems.
Sep 30 12:50:41 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Sep 30 12:50:41 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Sep 30 12:50:41 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 30 12:50:41 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Sep 30 12:50:41 localhost systemd[1]: Starting Automatic Boot Loader Update...
Sep 30 12:50:41 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Sep 30 12:50:41 localhost systemd[1]: Starting Create Volatile Files and Directories...
Sep 30 12:50:41 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Sep 30 12:50:41 localhost systemd[1]: Finished Automatic Boot Loader Update.
Sep 30 12:50:42 localhost systemd[1]: Finished Create Volatile Files and Directories.
Sep 30 12:50:42 localhost systemd[1]: Starting Security Auditing Service...
Sep 30 12:50:42 localhost systemd[1]: Starting RPC Bind...
Sep 30 12:50:42 localhost systemd[1]: Starting Rebuild Journal Catalog...
Sep 30 12:50:42 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Sep 30 12:50:42 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Sep 30 12:50:42 localhost systemd[1]: Finished Rebuild Journal Catalog.
Sep 30 12:50:42 localhost systemd[1]: Started RPC Bind.
Sep 30 12:50:42 localhost augenrules[710]: /sbin/augenrules: No change
Sep 30 12:50:42 localhost augenrules[725]: No rules
Sep 30 12:50:42 localhost augenrules[725]: enabled 1
Sep 30 12:50:42 localhost augenrules[725]: failure 1
Sep 30 12:50:42 localhost augenrules[725]: pid 705
Sep 30 12:50:42 localhost augenrules[725]: rate_limit 0
Sep 30 12:50:42 localhost augenrules[725]: backlog_limit 8192
Sep 30 12:50:42 localhost augenrules[725]: lost 0
Sep 30 12:50:42 localhost augenrules[725]: backlog 3
Sep 30 12:50:42 localhost augenrules[725]: backlog_wait_time 60000
Sep 30 12:50:42 localhost augenrules[725]: backlog_wait_time_actual 0
Sep 30 12:50:42 localhost augenrules[725]: enabled 1
Sep 30 12:50:42 localhost augenrules[725]: failure 1
Sep 30 12:50:42 localhost augenrules[725]: pid 705
Sep 30 12:50:42 localhost augenrules[725]: rate_limit 0
Sep 30 12:50:42 localhost augenrules[725]: backlog_limit 8192
Sep 30 12:50:42 localhost augenrules[725]: lost 0
Sep 30 12:50:42 localhost augenrules[725]: backlog 3
Sep 30 12:50:42 localhost augenrules[725]: backlog_wait_time 60000
Sep 30 12:50:42 localhost augenrules[725]: backlog_wait_time_actual 0
Sep 30 12:50:42 localhost augenrules[725]: enabled 1
Sep 30 12:50:42 localhost augenrules[725]: failure 1
Sep 30 12:50:42 localhost augenrules[725]: pid 705
Sep 30 12:50:42 localhost augenrules[725]: rate_limit 0
Sep 30 12:50:42 localhost augenrules[725]: backlog_limit 8192
Sep 30 12:50:42 localhost augenrules[725]: lost 0
Sep 30 12:50:42 localhost augenrules[725]: backlog 3
Sep 30 12:50:42 localhost augenrules[725]: backlog_wait_time 60000
Sep 30 12:50:42 localhost augenrules[725]: backlog_wait_time_actual 0
Sep 30 12:50:42 localhost systemd[1]: Started Security Auditing Service.
Sep 30 12:50:42 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Sep 30 12:50:42 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Sep 30 12:50:42 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Sep 30 12:50:42 localhost systemd[1]: Finished Rebuild Hardware Database.
Sep 30 12:50:42 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Sep 30 12:50:42 localhost systemd[1]: Starting Update is Completed...
Sep 30 12:50:42 localhost systemd[1]: Finished Update is Completed.
Sep 30 12:50:42 localhost systemd-udevd[734]: Using default interface naming scheme 'rhel-9.0'.
Sep 30 12:50:42 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Sep 30 12:50:42 localhost systemd[1]: Reached target System Initialization.
Sep 30 12:50:42 localhost systemd[1]: Started dnf makecache --timer.
Sep 30 12:50:42 localhost systemd[1]: Started Daily rotation of log files.
Sep 30 12:50:42 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Sep 30 12:50:42 localhost systemd[1]: Reached target Timer Units.
Sep 30 12:50:42 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Sep 30 12:50:42 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Sep 30 12:50:42 localhost systemd[1]: Reached target Socket Units.
Sep 30 12:50:42 localhost systemd[1]: Starting D-Bus System Message Bus...
Sep 30 12:50:42 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 30 12:50:42 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Sep 30 12:50:42 localhost systemd[1]: Starting Load Kernel Module configfs...
Sep 30 12:50:42 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 30 12:50:42 localhost systemd[1]: Finished Load Kernel Module configfs.
Sep 30 12:50:42 localhost systemd-udevd[752]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 12:50:42 localhost systemd[1]: Started D-Bus System Message Bus.
Sep 30 12:50:42 localhost systemd[1]: Reached target Basic System.
Sep 30 12:50:42 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Sep 30 12:50:42 localhost dbus-broker-lau[769]: Ready
Sep 30 12:50:42 localhost systemd[1]: Starting NTP client/server...
Sep 30 12:50:42 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 30 12:50:42 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 30 12:50:42 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 30 12:50:42 localhost chronyd[790]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Sep 30 12:50:42 localhost chronyd[790]: Loaded 0 symmetric keys
Sep 30 12:50:42 localhost chronyd[790]: Using right/UTC timezone to obtain leap second data
Sep 30 12:50:42 localhost chronyd[790]: Loaded seccomp filter (level 2)
Sep 30 12:50:42 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Sep 30 12:50:42 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 30 12:50:42 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 30 12:50:42 localhost kernel: Console: switching to colour dummy device 80x25
Sep 30 12:50:42 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 30 12:50:42 localhost kernel: [drm] features: -context_init
Sep 30 12:50:42 localhost kernel: [drm] number of scanouts: 1
Sep 30 12:50:42 localhost kernel: [drm] number of cap sets: 0
Sep 30 12:50:42 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Sep 30 12:50:42 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 30 12:50:42 localhost kernel: Console: switching to colour frame buffer device 128x48
Sep 30 12:50:42 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Sep 30 12:50:42 localhost kernel: kvm_amd: TSC scaling supported
Sep 30 12:50:42 localhost kernel: kvm_amd: Nested Virtualization enabled
Sep 30 12:50:42 localhost kernel: kvm_amd: Nested Paging enabled
Sep 30 12:50:42 localhost kernel: kvm_amd: LBR virtualization supported
Sep 30 12:50:42 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 30 12:50:42 localhost systemd[1]: Starting IPv4 firewall with iptables...
Sep 30 12:50:42 localhost systemd[1]: Started irqbalance daemon.
Sep 30 12:50:42 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Sep 30 12:50:42 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 12:50:42 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 12:50:42 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 12:50:42 localhost systemd[1]: Reached target sshd-keygen.target.
Sep 30 12:50:42 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Sep 30 12:50:42 localhost systemd[1]: Reached target User and Group Name Lookups.
Sep 30 12:50:42 localhost systemd[1]: Starting User Login Management...
Sep 30 12:50:42 localhost systemd[1]: Started NTP client/server.
Sep 30 12:50:42 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Sep 30 12:50:42 localhost systemd-logind[808]: New seat seat0.
Sep 30 12:50:42 localhost systemd-logind[808]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 30 12:50:42 localhost systemd-logind[808]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Sep 30 12:50:42 localhost systemd[1]: Started User Login Management.
Sep 30 12:50:42 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Sep 30 12:50:43 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Sep 30 12:50:43 localhost iptables.init[799]: iptables: Applying firewall rules: [  OK  ]
Sep 30 12:50:43 localhost systemd[1]: Finished IPv4 firewall with iptables.
Sep 30 12:50:43 localhost cloud-init[843]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 30 Sep 2025 12:50:43 +0000. Up 7.26 seconds.
Sep 30 12:50:43 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Sep 30 12:50:43 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Sep 30 12:50:43 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpgmap_o7k.mount: Deactivated successfully.
Sep 30 12:50:43 localhost systemd[1]: Starting Hostname Service...
Sep 30 12:50:43 localhost systemd[1]: Started Hostname Service.
Sep 30 12:50:43 np0005462840.novalocal systemd-hostnamed[857]: Hostname set to <np0005462840.novalocal> (static)
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Reached target Preparation for Network.
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Starting Network Manager...
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2388] NetworkManager (version 1.54.1-1.el9) is starting... (boot:1819ccf5-a897-485a-80b9-c42731ad5ac8)
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2395] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2548] manager[0x556b70621080]: monitoring kernel firmware directory '/lib/firmware'.
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2613] hostname: hostname: using hostnamed
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2614] hostname: static hostname changed from (none) to "np0005462840.novalocal"
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2621] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2812] manager[0x556b70621080]: rfkill: Wi-Fi hardware radio set enabled
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2814] manager[0x556b70621080]: rfkill: WWAN hardware radio set enabled
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2917] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2917] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2918] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2919] manager: Networking is enabled by state file
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2921] settings: Loaded settings plugin: keyfile (internal)
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2954] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.2991] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3020] dhcp: init: Using DHCP client 'internal'
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3024] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3045] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3061] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3072] device (lo): Activation: starting connection 'lo' (5742ac42-8bba-40d6-bdcd-b6cbacaa64c1)
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3086] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3090] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Started Network Manager.
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3153] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Reached target Network.
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3161] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3166] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3168] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3171] device (eth0): carrier: link connected
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3174] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3185] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3193] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Starting Network Manager Wait Online...
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3199] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3200] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3201] manager: NetworkManager state is now CONNECTING
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3202] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3211] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3214] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3303] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3314] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3342] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3351] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3354] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3363] device (lo): Activation: successful, device activated.
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3380] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3382] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3386] manager: NetworkManager state is now CONNECTED_SITE
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3391] device (eth0): Activation: successful, device activated.
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3395] manager: NetworkManager state is now CONNECTED_GLOBAL
Sep 30 12:50:44 np0005462840.novalocal NetworkManager[861]: <info>  [1759236644.3398] manager: startup complete
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Reached target NFS client services.
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Reached target Remote File Systems.
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Finished Network Manager Wait Online.
Sep 30 12:50:44 np0005462840.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 30 Sep 2025 12:50:44 +0000. Up 8.39 seconds.
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: |  eth0  | True |         38.102.83.20         | 255.255.255.0 | global | fa:16:3e:ed:d5:50 |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:feed:d550/64 |       .       |  link  | fa:16:3e:ed:d5:50 |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Sep 30 12:50:44 np0005462840.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Sep 30 12:50:45 np0005462840.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Sep 30 12:50:45 np0005462840.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Sep 30 12:50:45 np0005462840.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Sep 30 12:50:45 np0005462840.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Sep 30 12:50:45 np0005462840.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Sep 30 12:50:45 np0005462840.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: Generating public/private rsa key pair.
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: The key fingerprint is:
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: SHA256:3DISxvrxkOhBVQY0+RaAGgpUqYzfCrgRqjFJos7p9h4 root@np0005462840.novalocal
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: The key's randomart image is:
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: +---[RSA 3072]----+
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |.....o*=o        |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |. ...o.o.        |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |+..o. +. .       |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |=+.. + +o.       |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |*+ .+ =.S .      |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |X ...o = o       |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |+*.E. . .        |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |o=. .            |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |o.oo             |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: +----[SHA256]-----+
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: The key fingerprint is:
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: SHA256:JeR+zK3fMt0vJ1Fr6qkm4MJF6eq155cfy7YSmKVpK04 root@np0005462840.novalocal
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: The key's randomart image is:
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: +---[ECDSA 256]---+
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |        .        |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |       o         |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |        + .      |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |       + = o    .|
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |      o S X .  ..|
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |       + B o  .o |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |    . +Eo o oooo |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |     +oo.+ *++B o|
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |    ..oo+.+.BXo+o|
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: +----[SHA256]-----+
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: The key fingerprint is:
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: SHA256:laOaQvJoX/IrCB2+u7MuUEJH7o2iqg2raF0wKzWvEdQ root@np0005462840.novalocal
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: The key's randomart image is:
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: +--[ED25519 256]--+
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |  ...            |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: | ..o E     .     |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |. o.      +      |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |. +*o    o .     |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: | *o+O.  S        |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |+.+B o o         |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |+.=oB +          |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |+*++.=           |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: |Xo*=..o.         |
Sep 30 12:50:46 np0005462840.novalocal cloud-init[922]: +----[SHA256]-----+
Sep 30 12:50:46 np0005462840.novalocal sm-notify[1003]: Version 2.5.4 starting
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Reached target Cloud-config availability.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Reached target Network is Online.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Starting System Logging Service...
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Starting OpenSSH server daemon...
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Starting Permit User Sessions...
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Started Notify NFS peers of a restart.
Sep 30 12:50:46 np0005462840.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Sep 30 12:50:46 np0005462840.novalocal sshd[1005]: Server listening on :: port 22.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Started OpenSSH server daemon.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Finished Permit User Sessions.
Sep 30 12:50:46 np0005462840.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Sep 30 12:50:46 np0005462840.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Started Command Scheduler.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Started Getty on tty1.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Started Serial Getty on ttyS0.
Sep 30 12:50:46 np0005462840.novalocal crond[1007]: (CRON) STARTUP (1.5.7)
Sep 30 12:50:46 np0005462840.novalocal crond[1007]: (CRON) INFO (Syslog will be used instead of sendmail.)
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Reached target Login Prompts.
Sep 30 12:50:46 np0005462840.novalocal crond[1007]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 43% if used.)
Sep 30 12:50:46 np0005462840.novalocal crond[1007]: (CRON) INFO (running with inotify support)
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Started System Logging Service.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Reached target Multi-User System.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Sep 30 12:50:46 np0005462840.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 12:50:46 np0005462840.novalocal cloud-init[1017]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 30 Sep 2025 12:50:46 +0000. Up 10.20 seconds.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Sep 30 12:50:46 np0005462840.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Sep 30 12:50:46 np0005462840.novalocal cloud-init[1021]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 30 Sep 2025 12:50:46 +0000. Up 10.60 seconds.
Sep 30 12:50:46 np0005462840.novalocal sshd-session[1024]: Unable to negotiate with 38.102.83.114 port 35144: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Sep 30 12:50:46 np0005462840.novalocal cloud-init[1028]: #############################################################
Sep 30 12:50:46 np0005462840.novalocal cloud-init[1030]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Sep 30 12:50:46 np0005462840.novalocal cloud-init[1033]: 256 SHA256:JeR+zK3fMt0vJ1Fr6qkm4MJF6eq155cfy7YSmKVpK04 root@np0005462840.novalocal (ECDSA)
Sep 30 12:50:46 np0005462840.novalocal cloud-init[1035]: 256 SHA256:laOaQvJoX/IrCB2+u7MuUEJH7o2iqg2raF0wKzWvEdQ root@np0005462840.novalocal (ED25519)
Sep 30 12:50:46 np0005462840.novalocal sshd-session[1032]: Unable to negotiate with 38.102.83.114 port 35152: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Sep 30 12:50:46 np0005462840.novalocal cloud-init[1038]: 3072 SHA256:3DISxvrxkOhBVQY0+RaAGgpUqYzfCrgRqjFJos7p9h4 root@np0005462840.novalocal (RSA)
Sep 30 12:50:46 np0005462840.novalocal cloud-init[1040]: -----END SSH HOST KEY FINGERPRINTS-----
Sep 30 12:50:46 np0005462840.novalocal cloud-init[1041]: #############################################################
Sep 30 12:50:46 np0005462840.novalocal sshd-session[1039]: Unable to negotiate with 38.102.83.114 port 35166: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Sep 30 12:50:47 np0005462840.novalocal sshd-session[1045]: Connection reset by 38.102.83.114 port 35174 [preauth]
Sep 30 12:50:47 np0005462840.novalocal sshd-session[1022]: Connection closed by 38.102.83.114 port 35130 [preauth]
Sep 30 12:50:47 np0005462840.novalocal cloud-init[1021]: Cloud-init v. 24.4-7.el9 finished at Tue, 30 Sep 2025 12:50:47 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.77 seconds
Sep 30 12:50:47 np0005462840.novalocal sshd-session[1026]: Connection closed by 38.102.83.114 port 35146 [preauth]
Sep 30 12:50:47 np0005462840.novalocal sshd-session[1050]: Unable to negotiate with 38.102.83.114 port 35180: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Sep 30 12:50:47 np0005462840.novalocal sshd-session[1052]: Unable to negotiate with 38.102.83.114 port 35194: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Sep 30 12:50:47 np0005462840.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Sep 30 12:50:47 np0005462840.novalocal systemd[1]: Reached target Cloud-init target.
Sep 30 12:50:47 np0005462840.novalocal systemd[1]: Startup finished in 1.784s (kernel) + 2.937s (initrd) + 6.132s (userspace) = 10.854s.
Sep 30 12:50:47 np0005462840.novalocal sshd-session[1048]: Connection closed by 38.102.83.114 port 35178 [preauth]
Sep 30 12:50:49 np0005462840.novalocal chronyd[790]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Sep 30 12:50:49 np0005462840.novalocal chronyd[790]: System clock wrong by 1.331453 seconds
Sep 30 12:50:50 np0005462840.novalocal chronyd[790]: System clock was stepped by 1.331453 seconds
Sep 30 12:50:50 np0005462840.novalocal chronyd[790]: System clock TAI offset set to 37 seconds
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: Cannot change IRQ 25 affinity: Operation not permitted
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: IRQ 25 affinity is now unmanaged
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: Cannot change IRQ 31 affinity: Operation not permitted
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: IRQ 31 affinity is now unmanaged
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: Cannot change IRQ 28 affinity: Operation not permitted
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: IRQ 28 affinity is now unmanaged
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: Cannot change IRQ 32 affinity: Operation not permitted
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: IRQ 32 affinity is now unmanaged
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: Cannot change IRQ 30 affinity: Operation not permitted
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: IRQ 30 affinity is now unmanaged
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: Cannot change IRQ 29 affinity: Operation not permitted
Sep 30 12:50:54 np0005462840.novalocal irqbalance[801]: IRQ 29 affinity is now unmanaged
Sep 30 12:50:55 np0005462840.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 12:51:01 np0005462840.novalocal sshd-session[1055]: Received disconnect from 121.204.171.142 port 54520:11: Bye Bye [preauth]
Sep 30 12:51:01 np0005462840.novalocal sshd-session[1055]: Disconnected from authenticating user root 121.204.171.142 port 54520 [preauth]
Sep 30 12:51:15 np0005462840.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 30 12:51:17 np0005462840.novalocal sshd-session[1060]: Invalid user kelvin from 181.212.34.237 port 49701
Sep 30 12:51:17 np0005462840.novalocal sshd-session[1060]: Received disconnect from 181.212.34.237 port 49701:11: Bye Bye [preauth]
Sep 30 12:51:17 np0005462840.novalocal sshd-session[1060]: Disconnected from invalid user kelvin 181.212.34.237 port 49701 [preauth]
Sep 30 12:51:57 np0005462840.novalocal sshd-session[1062]: Invalid user thomas from 23.95.128.167 port 42064
Sep 30 12:51:57 np0005462840.novalocal sshd-session[1062]: Received disconnect from 23.95.128.167 port 42064:11: Bye Bye [preauth]
Sep 30 12:51:57 np0005462840.novalocal sshd-session[1062]: Disconnected from invalid user thomas 23.95.128.167 port 42064 [preauth]
Sep 30 12:52:10 np0005462840.novalocal sshd-session[1064]: Invalid user test from 51.75.194.10 port 56402
Sep 30 12:52:10 np0005462840.novalocal sshd-session[1064]: Received disconnect from 51.75.194.10 port 56402:11: Bye Bye [preauth]
Sep 30 12:52:10 np0005462840.novalocal sshd-session[1064]: Disconnected from invalid user test 51.75.194.10 port 56402 [preauth]
Sep 30 12:53:24 np0005462840.novalocal sshd-session[1066]: Received disconnect from 82.29.72.161 port 33976:11: Bye Bye [preauth]
Sep 30 12:53:24 np0005462840.novalocal sshd-session[1066]: Disconnected from authenticating user root 82.29.72.161 port 33976 [preauth]
Sep 30 12:53:28 np0005462840.novalocal sshd-session[1068]: Received disconnect from 59.36.78.66 port 59382:11: Bye Bye [preauth]
Sep 30 12:53:28 np0005462840.novalocal sshd-session[1068]: Disconnected from authenticating user root 59.36.78.66 port 59382 [preauth]
Sep 30 12:53:51 np0005462840.novalocal sshd-session[1070]: Invalid user loader from 87.251.77.103 port 35188
Sep 30 12:53:51 np0005462840.novalocal sshd-session[1070]: Received disconnect from 87.251.77.103 port 35188:11: Bye Bye [preauth]
Sep 30 12:53:51 np0005462840.novalocal sshd-session[1070]: Disconnected from invalid user loader 87.251.77.103 port 35188 [preauth]
Sep 30 12:54:33 np0005462840.novalocal sshd-session[1072]: Received disconnect from 193.46.255.7 port 51094:11:  [preauth]
Sep 30 12:54:33 np0005462840.novalocal sshd-session[1072]: Disconnected from authenticating user root 193.46.255.7 port 51094 [preauth]
Sep 30 12:55:25 np0005462840.novalocal sshd-session[1074]: Invalid user siesa from 51.75.194.10 port 53766
Sep 30 12:55:25 np0005462840.novalocal sshd-session[1074]: Received disconnect from 51.75.194.10 port 53766:11: Bye Bye [preauth]
Sep 30 12:55:25 np0005462840.novalocal sshd-session[1074]: Disconnected from invalid user siesa 51.75.194.10 port 53766 [preauth]
Sep 30 12:55:38 np0005462840.novalocal sshd-session[1076]: Invalid user jin from 181.212.34.237 port 6888
Sep 30 12:55:38 np0005462840.novalocal sshd-session[1076]: Received disconnect from 181.212.34.237 port 6888:11: Bye Bye [preauth]
Sep 30 12:55:38 np0005462840.novalocal sshd-session[1076]: Disconnected from invalid user jin 181.212.34.237 port 6888 [preauth]
Sep 30 12:55:49 np0005462840.novalocal sshd-session[1078]: Received disconnect from 121.204.171.142 port 60584:11: Bye Bye [preauth]
Sep 30 12:55:49 np0005462840.novalocal sshd-session[1078]: Disconnected from authenticating user root 121.204.171.142 port 60584 [preauth]
Sep 30 12:55:55 np0005462840.novalocal sshd-session[1080]: Invalid user loader from 82.29.72.161 port 46444
Sep 30 12:55:56 np0005462840.novalocal sshd-session[1080]: Received disconnect from 82.29.72.161 port 46444:11: Bye Bye [preauth]
Sep 30 12:55:56 np0005462840.novalocal sshd-session[1080]: Disconnected from invalid user loader 82.29.72.161 port 46444 [preauth]
Sep 30 12:56:08 np0005462840.novalocal sshd-session[1082]: Invalid user thomas from 87.251.77.103 port 58712
Sep 30 12:56:08 np0005462840.novalocal sshd-session[1082]: Received disconnect from 87.251.77.103 port 58712:11: Bye Bye [preauth]
Sep 30 12:56:08 np0005462840.novalocal sshd-session[1082]: Disconnected from invalid user thomas 87.251.77.103 port 58712 [preauth]
Sep 30 12:56:23 np0005462840.novalocal sshd-session[1084]: Invalid user thomas from 51.75.194.10 port 37768
Sep 30 12:56:23 np0005462840.novalocal sshd-session[1084]: Received disconnect from 51.75.194.10 port 37768:11: Bye Bye [preauth]
Sep 30 12:56:23 np0005462840.novalocal sshd-session[1084]: Disconnected from invalid user thomas 51.75.194.10 port 37768 [preauth]
Sep 30 12:56:23 np0005462840.novalocal sshd-session[1086]: Invalid user ambari from 23.95.128.167 port 35688
Sep 30 12:56:23 np0005462840.novalocal sshd-session[1086]: Received disconnect from 23.95.128.167 port 35688:11: Bye Bye [preauth]
Sep 30 12:56:23 np0005462840.novalocal sshd-session[1086]: Disconnected from invalid user ambari 23.95.128.167 port 35688 [preauth]
Sep 30 12:56:49 np0005462840.novalocal sshd-session[1088]: Invalid user ntuser from 59.36.78.66 port 42034
Sep 30 12:56:50 np0005462840.novalocal sshd-session[1088]: Received disconnect from 59.36.78.66 port 42034:11: Bye Bye [preauth]
Sep 30 12:56:50 np0005462840.novalocal sshd-session[1088]: Disconnected from invalid user ntuser 59.36.78.66 port 42034 [preauth]
Sep 30 12:56:53 np0005462840.novalocal sshd-session[1090]: Invalid user deepak from 181.212.34.237 port 26959
Sep 30 12:56:53 np0005462840.novalocal sshd-session[1090]: Received disconnect from 181.212.34.237 port 26959:11: Bye Bye [preauth]
Sep 30 12:56:53 np0005462840.novalocal sshd-session[1090]: Disconnected from invalid user deepak 181.212.34.237 port 26959 [preauth]
Sep 30 12:56:54 np0005462840.novalocal irqbalance[801]: Cannot change IRQ 26 affinity: Operation not permitted
Sep 30 12:56:54 np0005462840.novalocal irqbalance[801]: IRQ 26 affinity is now unmanaged
Sep 30 12:57:01 np0005462840.novalocal sshd-session[1092]: Invalid user siesa from 82.29.72.161 port 42366
Sep 30 12:57:02 np0005462840.novalocal sshd-session[1092]: Received disconnect from 82.29.72.161 port 42366:11: Bye Bye [preauth]
Sep 30 12:57:02 np0005462840.novalocal sshd-session[1092]: Disconnected from invalid user siesa 82.29.72.161 port 42366 [preauth]
Sep 30 12:57:21 np0005462840.novalocal sshd-session[1094]: Invalid user git from 51.75.194.10 port 52080
Sep 30 12:57:21 np0005462840.novalocal sshd-session[1094]: Received disconnect from 51.75.194.10 port 52080:11: Bye Bye [preauth]
Sep 30 12:57:21 np0005462840.novalocal sshd-session[1094]: Disconnected from invalid user git 51.75.194.10 port 52080 [preauth]
Sep 30 12:57:28 np0005462840.novalocal sshd-session[1096]: Invalid user in from 23.95.128.167 port 39088
Sep 30 12:57:28 np0005462840.novalocal sshd-session[1096]: Received disconnect from 23.95.128.167 port 39088:11: Bye Bye [preauth]
Sep 30 12:57:28 np0005462840.novalocal sshd-session[1096]: Disconnected from invalid user in 23.95.128.167 port 39088 [preauth]
Sep 30 12:57:52 np0005462840.novalocal sshd-session[1098]: Invalid user ambari from 87.251.77.103 port 34356
Sep 30 12:57:52 np0005462840.novalocal sshd-session[1098]: Received disconnect from 87.251.77.103 port 34356:11: Bye Bye [preauth]
Sep 30 12:57:52 np0005462840.novalocal sshd-session[1098]: Disconnected from invalid user ambari 87.251.77.103 port 34356 [preauth]
Sep 30 12:58:04 np0005462840.novalocal sshd-session[1100]: Invalid user foundry from 82.29.72.161 port 38292
Sep 30 12:58:04 np0005462840.novalocal sshd-session[1100]: Received disconnect from 82.29.72.161 port 38292:11: Bye Bye [preauth]
Sep 30 12:58:04 np0005462840.novalocal sshd-session[1100]: Disconnected from invalid user foundry 82.29.72.161 port 38292 [preauth]
Sep 30 12:58:11 np0005462840.novalocal sshd-session[1102]: Invalid user test from 181.212.34.237 port 22752
Sep 30 12:58:11 np0005462840.novalocal sshd-session[1102]: Received disconnect from 181.212.34.237 port 22752:11: Bye Bye [preauth]
Sep 30 12:58:11 np0005462840.novalocal sshd-session[1102]: Disconnected from invalid user test 181.212.34.237 port 22752 [preauth]
Sep 30 12:58:15 np0005462840.novalocal sshd-session[1104]: Invalid user mysftp from 51.75.194.10 port 53334
Sep 30 12:58:15 np0005462840.novalocal sshd-session[1104]: Received disconnect from 51.75.194.10 port 53334:11: Bye Bye [preauth]
Sep 30 12:58:15 np0005462840.novalocal sshd-session[1104]: Disconnected from invalid user mysftp 51.75.194.10 port 53334 [preauth]
Sep 30 12:58:37 np0005462840.novalocal sshd-session[1107]: Received disconnect from 23.95.128.167 port 45488:11: Bye Bye [preauth]
Sep 30 12:58:37 np0005462840.novalocal sshd-session[1107]: Disconnected from authenticating user root 23.95.128.167 port 45488 [preauth]
Sep 30 12:59:03 np0005462840.novalocal sshd-session[1110]: Invalid user mysftp from 82.29.72.161 port 34210
Sep 30 12:59:03 np0005462840.novalocal sshd-session[1110]: Received disconnect from 82.29.72.161 port 34210:11: Bye Bye [preauth]
Sep 30 12:59:03 np0005462840.novalocal sshd-session[1110]: Disconnected from invalid user mysftp 82.29.72.161 port 34210 [preauth]
Sep 30 12:59:09 np0005462840.novalocal sshd-session[1112]: Invalid user loader from 51.75.194.10 port 53862
Sep 30 12:59:09 np0005462840.novalocal sshd-session[1112]: Received disconnect from 51.75.194.10 port 53862:11: Bye Bye [preauth]
Sep 30 12:59:09 np0005462840.novalocal sshd-session[1112]: Disconnected from invalid user loader 51.75.194.10 port 53862 [preauth]
Sep 30 12:59:18 np0005462840.novalocal sshd-session[1114]: Invalid user ubnt from 185.156.73.233 port 31550
Sep 30 12:59:18 np0005462840.novalocal sshd-session[1114]: Connection closed by invalid user ubnt 185.156.73.233 port 31550 [preauth]
Sep 30 12:59:30 np0005462840.novalocal sshd-session[1116]: Received disconnect from 87.251.77.103 port 56446:11: Bye Bye [preauth]
Sep 30 12:59:30 np0005462840.novalocal sshd-session[1116]: Disconnected from authenticating user root 87.251.77.103 port 56446 [preauth]
Sep 30 12:59:35 np0005462840.novalocal sshd-session[1118]: Invalid user foundry from 23.95.128.167 port 34622
Sep 30 12:59:35 np0005462840.novalocal sshd-session[1118]: Received disconnect from 23.95.128.167 port 34622:11: Bye Bye [preauth]
Sep 30 12:59:35 np0005462840.novalocal sshd-session[1118]: Disconnected from invalid user foundry 23.95.128.167 port 34622 [preauth]
Sep 30 12:59:37 np0005462840.novalocal sshd-session[1120]: Invalid user hive from 181.212.34.237 port 26929
Sep 30 12:59:37 np0005462840.novalocal sshd-session[1120]: Received disconnect from 181.212.34.237 port 26929:11: Bye Bye [preauth]
Sep 30 12:59:37 np0005462840.novalocal sshd-session[1120]: Disconnected from invalid user hive 181.212.34.237 port 26929 [preauth]
Sep 30 12:59:50 np0005462840.novalocal sshd-session[1123]: Received disconnect from 193.46.255.20 port 33836:11:  [preauth]
Sep 30 12:59:50 np0005462840.novalocal sshd-session[1123]: Disconnected from authenticating user root 193.46.255.20 port 33836 [preauth]
Sep 30 13:00:00 np0005462840.novalocal sshd-session[1126]: Received disconnect from 51.75.194.10 port 36118:11: Bye Bye [preauth]
Sep 30 13:00:00 np0005462840.novalocal sshd-session[1126]: Disconnected from authenticating user root 51.75.194.10 port 36118 [preauth]
Sep 30 13:00:01 np0005462840.novalocal sshd-session[1128]: Invalid user mastodon from 82.29.72.161 port 58364
Sep 30 13:00:01 np0005462840.novalocal sshd-session[1128]: Received disconnect from 82.29.72.161 port 58364:11: Bye Bye [preauth]
Sep 30 13:00:01 np0005462840.novalocal sshd-session[1128]: Disconnected from invalid user mastodon 82.29.72.161 port 58364 [preauth]
Sep 30 13:00:28 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 1106
Sep 30 13:00:34 np0005462840.novalocal sshd-session[1130]: Received disconnect from 23.95.128.167 port 55514:11: Bye Bye [preauth]
Sep 30 13:00:34 np0005462840.novalocal sshd-session[1130]: Disconnected from authenticating user root 23.95.128.167 port 55514 [preauth]
Sep 30 13:00:45 np0005462840.novalocal sshd-session[1132]: Invalid user wildfly from 181.212.34.237 port 3932
Sep 30 13:00:45 np0005462840.novalocal sshd-session[1132]: Received disconnect from 181.212.34.237 port 3932:11: Bye Bye [preauth]
Sep 30 13:00:45 np0005462840.novalocal sshd-session[1132]: Disconnected from invalid user wildfly 181.212.34.237 port 3932 [preauth]
Sep 30 13:00:52 np0005462840.novalocal sshd-session[1134]: Invalid user test123 from 51.75.194.10 port 38708
Sep 30 13:00:52 np0005462840.novalocal sshd-session[1134]: Received disconnect from 51.75.194.10 port 38708:11: Bye Bye [preauth]
Sep 30 13:00:52 np0005462840.novalocal sshd-session[1134]: Disconnected from invalid user test123 51.75.194.10 port 38708 [preauth]
Sep 30 13:00:58 np0005462840.novalocal sshd-session[1136]: Invalid user foundry from 82.29.72.161 port 54282
Sep 30 13:00:58 np0005462840.novalocal sshd-session[1136]: Received disconnect from 82.29.72.161 port 54282:11: Bye Bye [preauth]
Sep 30 13:00:58 np0005462840.novalocal sshd-session[1136]: Disconnected from invalid user foundry 82.29.72.161 port 54282 [preauth]
Sep 30 13:01:01 np0005462840.novalocal CROND[1139]: (root) CMD (run-parts /etc/cron.hourly)
Sep 30 13:01:01 np0005462840.novalocal run-parts[1142]: (/etc/cron.hourly) starting 0anacron
Sep 30 13:01:01 np0005462840.novalocal anacron[1150]: Anacron started on 2025-09-30
Sep 30 13:01:01 np0005462840.novalocal anacron[1150]: Will run job `cron.daily' in 38 min.
Sep 30 13:01:01 np0005462840.novalocal anacron[1150]: Will run job `cron.weekly' in 58 min.
Sep 30 13:01:01 np0005462840.novalocal anacron[1150]: Will run job `cron.monthly' in 78 min.
Sep 30 13:01:01 np0005462840.novalocal anacron[1150]: Jobs will be executed sequentially
Sep 30 13:01:01 np0005462840.novalocal run-parts[1152]: (/etc/cron.hourly) finished 0anacron
Sep 30 13:01:01 np0005462840.novalocal CROND[1138]: (root) CMDEND (run-parts /etc/cron.hourly)
Sep 30 13:01:14 np0005462840.novalocal sshd-session[1153]: Invalid user superadmin from 87.251.77.103 port 47404
Sep 30 13:01:14 np0005462840.novalocal sshd-session[1153]: Received disconnect from 87.251.77.103 port 47404:11: Bye Bye [preauth]
Sep 30 13:01:14 np0005462840.novalocal sshd-session[1153]: Disconnected from invalid user superadmin 87.251.77.103 port 47404 [preauth]
Sep 30 13:01:36 np0005462840.novalocal sshd-session[1156]: Received disconnect from 23.95.128.167 port 52404:11: Bye Bye [preauth]
Sep 30 13:01:36 np0005462840.novalocal sshd-session[1156]: Disconnected from authenticating user root 23.95.128.167 port 52404 [preauth]
Sep 30 13:01:38 np0005462840.novalocal sshd[1005]: drop connection #2 from [59.36.78.66]:44176 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:01:47 np0005462840.novalocal sshd-session[1158]: Invalid user zx from 51.75.194.10 port 32936
Sep 30 13:01:47 np0005462840.novalocal sshd-session[1158]: Received disconnect from 51.75.194.10 port 32936:11: Bye Bye [preauth]
Sep 30 13:01:47 np0005462840.novalocal sshd-session[1158]: Disconnected from invalid user zx 51.75.194.10 port 32936 [preauth]
Sep 30 13:01:51 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 121.204.171.142 to 38.102.83.20, pid = 1122
Sep 30 13:01:56 np0005462840.novalocal sshd-session[1160]: Invalid user minecraft from 181.212.34.237 port 49188
Sep 30 13:01:56 np0005462840.novalocal sshd-session[1160]: Received disconnect from 181.212.34.237 port 49188:11: Bye Bye [preauth]
Sep 30 13:01:56 np0005462840.novalocal sshd-session[1160]: Disconnected from invalid user minecraft 181.212.34.237 port 49188 [preauth]
Sep 30 13:01:58 np0005462840.novalocal sshd-session[1162]: Invalid user auser from 82.29.72.161 port 50208
Sep 30 13:01:58 np0005462840.novalocal sshd-session[1162]: Received disconnect from 82.29.72.161 port 50208:11: Bye Bye [preauth]
Sep 30 13:01:58 np0005462840.novalocal sshd-session[1162]: Disconnected from invalid user auser 82.29.72.161 port 50208 [preauth]
Sep 30 13:02:39 np0005462840.novalocal sshd-session[1164]: Invalid user foundry from 23.95.128.167 port 60016
Sep 30 13:02:39 np0005462840.novalocal sshd-session[1164]: Received disconnect from 23.95.128.167 port 60016:11: Bye Bye [preauth]
Sep 30 13:02:39 np0005462840.novalocal sshd-session[1164]: Disconnected from invalid user foundry 23.95.128.167 port 60016 [preauth]
Sep 30 13:02:42 np0005462840.novalocal sshd-session[1166]: Invalid user platform from 51.75.194.10 port 60088
Sep 30 13:02:42 np0005462840.novalocal sshd-session[1166]: Received disconnect from 51.75.194.10 port 60088:11: Bye Bye [preauth]
Sep 30 13:02:42 np0005462840.novalocal sshd-session[1166]: Disconnected from invalid user platform 51.75.194.10 port 60088 [preauth]
Sep 30 13:02:57 np0005462840.novalocal sshd[1005]: drop connection #1 from [121.204.171.142]:59370 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:02:58 np0005462840.novalocal sshd-session[1168]: Invalid user extern from 82.29.72.161 port 46130
Sep 30 13:02:58 np0005462840.novalocal sshd-session[1168]: Received disconnect from 82.29.72.161 port 46130:11: Bye Bye [preauth]
Sep 30 13:02:58 np0005462840.novalocal sshd-session[1168]: Disconnected from invalid user extern 82.29.72.161 port 46130 [preauth]
Sep 30 13:03:06 np0005462840.novalocal sshd-session[1170]: Invalid user tecnica from 181.212.34.237 port 27455
Sep 30 13:03:06 np0005462840.novalocal sshd-session[1170]: Received disconnect from 181.212.34.237 port 27455:11: Bye Bye [preauth]
Sep 30 13:03:06 np0005462840.novalocal sshd-session[1170]: Disconnected from invalid user tecnica 181.212.34.237 port 27455 [preauth]
Sep 30 13:03:30 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 121.204.171.142 to 38.102.83.20, pid = 1155
Sep 30 13:03:39 np0005462840.novalocal sshd-session[1172]: Invalid user iot from 51.75.194.10 port 50184
Sep 30 13:03:39 np0005462840.novalocal sshd-session[1172]: Received disconnect from 51.75.194.10 port 50184:11: Bye Bye [preauth]
Sep 30 13:03:39 np0005462840.novalocal sshd-session[1172]: Disconnected from invalid user iot 51.75.194.10 port 50184 [preauth]
Sep 30 13:03:47 np0005462840.novalocal sshd-session[1174]: Invalid user ya from 23.95.128.167 port 50278
Sep 30 13:03:47 np0005462840.novalocal sshd-session[1174]: Received disconnect from 23.95.128.167 port 50278:11: Bye Bye [preauth]
Sep 30 13:03:47 np0005462840.novalocal sshd-session[1174]: Disconnected from invalid user ya 23.95.128.167 port 50278 [preauth]
Sep 30 13:04:05 np0005462840.novalocal sshd-session[1176]: Received disconnect from 82.29.72.161 port 42056:11: Bye Bye [preauth]
Sep 30 13:04:05 np0005462840.novalocal sshd-session[1176]: Disconnected from authenticating user root 82.29.72.161 port 42056 [preauth]
Sep 30 13:04:10 np0005462840.novalocal sshd[1005]: drop connection #0 from [121.204.171.142]:60660 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:04:20 np0005462840.novalocal sshd-session[1179]: Invalid user superadmin from 181.212.34.237 port 4888
Sep 30 13:04:20 np0005462840.novalocal sshd-session[1179]: Received disconnect from 181.212.34.237 port 4888:11: Bye Bye [preauth]
Sep 30 13:04:20 np0005462840.novalocal sshd-session[1179]: Disconnected from invalid user superadmin 181.212.34.237 port 4888 [preauth]
Sep 30 13:04:30 np0005462840.novalocal sshd-session[1181]: Connection closed by authenticating user root 80.94.95.116 port 44356 [preauth]
Sep 30 13:04:33 np0005462840.novalocal sshd-session[1183]: Invalid user ambari from 51.75.194.10 port 52416
Sep 30 13:04:33 np0005462840.novalocal sshd-session[1183]: Received disconnect from 51.75.194.10 port 52416:11: Bye Bye [preauth]
Sep 30 13:04:33 np0005462840.novalocal sshd-session[1183]: Disconnected from invalid user ambari 51.75.194.10 port 52416 [preauth]
Sep 30 13:04:46 np0005462840.novalocal sshd-session[1185]: Received disconnect from 91.224.92.79 port 43908:11:  [preauth]
Sep 30 13:04:46 np0005462840.novalocal sshd-session[1185]: Disconnected from authenticating user root 91.224.92.79 port 43908 [preauth]
Sep 30 13:04:48 np0005462840.novalocal sshd-session[1187]: Invalid user test from 23.95.128.167 port 41646
Sep 30 13:04:48 np0005462840.novalocal sshd-session[1187]: Received disconnect from 23.95.128.167 port 41646:11: Bye Bye [preauth]
Sep 30 13:04:48 np0005462840.novalocal sshd-session[1187]: Disconnected from invalid user test 23.95.128.167 port 41646 [preauth]
Sep 30 13:05:04 np0005462840.novalocal sshd-session[1189]: Received disconnect from 82.29.72.161 port 37970:11: Bye Bye [preauth]
Sep 30 13:05:04 np0005462840.novalocal sshd-session[1189]: Disconnected from authenticating user root 82.29.72.161 port 37970 [preauth]
Sep 30 13:05:25 np0005462840.novalocal sshd-session[1191]: Received disconnect from 51.75.194.10 port 53114:11: Bye Bye [preauth]
Sep 30 13:05:25 np0005462840.novalocal sshd-session[1191]: Disconnected from authenticating user root 51.75.194.10 port 53114 [preauth]
Sep 30 13:05:29 np0005462840.novalocal sshd-session[1194]: Received disconnect from 181.212.34.237 port 64742:11: Bye Bye [preauth]
Sep 30 13:05:29 np0005462840.novalocal sshd-session[1194]: Disconnected from authenticating user root 181.212.34.237 port 64742 [preauth]
Sep 30 13:05:44 np0005462840.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Sep 30 13:05:44 np0005462840.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Sep 30 13:05:44 np0005462840.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Sep 30 13:05:44 np0005462840.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Sep 30 13:05:46 np0005462840.novalocal sshd-session[1200]: Received disconnect from 23.95.128.167 port 34394:11: Bye Bye [preauth]
Sep 30 13:05:46 np0005462840.novalocal sshd-session[1200]: Disconnected from authenticating user root 23.95.128.167 port 34394 [preauth]
Sep 30 13:05:49 np0005462840.novalocal sshd-session[1198]: Invalid user eloa from 59.36.78.66 port 46454
Sep 30 13:06:03 np0005462840.novalocal sshd-session[1202]: Invalid user karthavya from 82.29.72.161 port 33886
Sep 30 13:06:03 np0005462840.novalocal sshd-session[1202]: Received disconnect from 82.29.72.161 port 33886:11: Bye Bye [preauth]
Sep 30 13:06:03 np0005462840.novalocal sshd-session[1202]: Disconnected from invalid user karthavya 82.29.72.161 port 33886 [preauth]
Sep 30 13:06:09 np0005462840.novalocal sshd-session[1204]: Received disconnect from 87.251.77.103 port 44038:11: Bye Bye [preauth]
Sep 30 13:06:09 np0005462840.novalocal sshd-session[1204]: Disconnected from authenticating user root 87.251.77.103 port 44038 [preauth]
Sep 30 13:06:18 np0005462840.novalocal sshd-session[1207]: Invalid user ya from 51.75.194.10 port 52400
Sep 30 13:06:18 np0005462840.novalocal sshd-session[1207]: Received disconnect from 51.75.194.10 port 52400:11: Bye Bye [preauth]
Sep 30 13:06:18 np0005462840.novalocal sshd-session[1207]: Disconnected from invalid user ya 51.75.194.10 port 52400 [preauth]
Sep 30 13:06:35 np0005462840.novalocal sshd-session[1209]: Invalid user ntuser from 181.212.34.237 port 26064
Sep 30 13:06:35 np0005462840.novalocal sshd-session[1209]: Received disconnect from 181.212.34.237 port 26064:11: Bye Bye [preauth]
Sep 30 13:06:35 np0005462840.novalocal sshd-session[1209]: Disconnected from invalid user ntuser 181.212.34.237 port 26064 [preauth]
Sep 30 13:06:47 np0005462840.novalocal sshd-session[1211]: Invalid user printer from 23.95.128.167 port 38012
Sep 30 13:06:47 np0005462840.novalocal sshd-session[1211]: Received disconnect from 23.95.128.167 port 38012:11: Bye Bye [preauth]
Sep 30 13:06:47 np0005462840.novalocal sshd-session[1211]: Disconnected from invalid user printer 23.95.128.167 port 38012 [preauth]
Sep 30 13:06:56 np0005462840.novalocal sshd-session[1213]: Invalid user siva from 121.204.171.142 port 32816
Sep 30 13:06:57 np0005462840.novalocal sshd-session[1213]: Received disconnect from 121.204.171.142 port 32816:11: Bye Bye [preauth]
Sep 30 13:06:57 np0005462840.novalocal sshd-session[1213]: Disconnected from invalid user siva 121.204.171.142 port 32816 [preauth]
Sep 30 13:07:00 np0005462840.novalocal sshd-session[1215]: Invalid user minecraft from 82.29.72.161 port 58038
Sep 30 13:07:00 np0005462840.novalocal sshd-session[1215]: Received disconnect from 82.29.72.161 port 58038:11: Bye Bye [preauth]
Sep 30 13:07:00 np0005462840.novalocal sshd-session[1215]: Disconnected from invalid user minecraft 82.29.72.161 port 58038 [preauth]
Sep 30 13:07:10 np0005462840.novalocal sshd-session[1217]: Invalid user deploy from 51.75.194.10 port 41000
Sep 30 13:07:10 np0005462840.novalocal sshd-session[1217]: Received disconnect from 51.75.194.10 port 41000:11: Bye Bye [preauth]
Sep 30 13:07:10 np0005462840.novalocal sshd-session[1217]: Disconnected from invalid user deploy 51.75.194.10 port 41000 [preauth]
Sep 30 13:07:28 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 121.204.171.142 to 38.102.83.20, pid = 1193
Sep 30 13:07:43 np0005462840.novalocal sshd-session[1219]: Invalid user nutanix from 181.212.34.237 port 22688
Sep 30 13:07:44 np0005462840.novalocal sshd-session[1219]: Received disconnect from 181.212.34.237 port 22688:11: Bye Bye [preauth]
Sep 30 13:07:44 np0005462840.novalocal sshd-session[1219]: Disconnected from invalid user nutanix 181.212.34.237 port 22688 [preauth]
Sep 30 13:07:45 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 1198
Sep 30 13:07:48 np0005462840.novalocal sshd-session[1221]: Invalid user minecraft from 23.95.128.167 port 54126
Sep 30 13:07:48 np0005462840.novalocal sshd-session[1221]: Received disconnect from 23.95.128.167 port 54126:11: Bye Bye [preauth]
Sep 30 13:07:48 np0005462840.novalocal sshd-session[1221]: Disconnected from invalid user minecraft 23.95.128.167 port 54126 [preauth]
Sep 30 13:07:51 np0005462840.novalocal sshd-session[1223]: Invalid user test from 87.251.77.103 port 50784
Sep 30 13:07:51 np0005462840.novalocal sshd-session[1223]: Received disconnect from 87.251.77.103 port 50784:11: Bye Bye [preauth]
Sep 30 13:07:51 np0005462840.novalocal sshd-session[1223]: Disconnected from invalid user test 87.251.77.103 port 50784 [preauth]
Sep 30 13:08:01 np0005462840.novalocal sshd-session[1225]: Invalid user git from 82.29.72.161 port 53960
Sep 30 13:08:01 np0005462840.novalocal sshd-session[1225]: Received disconnect from 82.29.72.161 port 53960:11: Bye Bye [preauth]
Sep 30 13:08:01 np0005462840.novalocal sshd-session[1225]: Disconnected from invalid user git 82.29.72.161 port 53960 [preauth]
Sep 30 13:08:06 np0005462840.novalocal sshd-session[1227]: Invalid user in from 51.75.194.10 port 35196
Sep 30 13:08:06 np0005462840.novalocal sshd-session[1227]: Received disconnect from 51.75.194.10 port 35196:11: Bye Bye [preauth]
Sep 30 13:08:06 np0005462840.novalocal sshd-session[1227]: Disconnected from invalid user in 51.75.194.10 port 35196 [preauth]
Sep 30 13:08:52 np0005462840.novalocal sshd-session[1230]: Received disconnect from 23.95.128.167 port 35128:11: Bye Bye [preauth]
Sep 30 13:08:52 np0005462840.novalocal sshd-session[1230]: Disconnected from authenticating user root 23.95.128.167 port 35128 [preauth]
Sep 30 13:08:56 np0005462840.novalocal sshd-session[1232]: Invalid user nana from 181.212.34.237 port 31831
Sep 30 13:08:56 np0005462840.novalocal sshd-session[1232]: Received disconnect from 181.212.34.237 port 31831:11: Bye Bye [preauth]
Sep 30 13:08:56 np0005462840.novalocal sshd-session[1232]: Disconnected from invalid user nana 181.212.34.237 port 31831 [preauth]
Sep 30 13:09:06 np0005462840.novalocal sshd-session[1234]: Received disconnect from 82.29.72.161 port 49882:11: Bye Bye [preauth]
Sep 30 13:09:06 np0005462840.novalocal sshd-session[1234]: Disconnected from authenticating user root 82.29.72.161 port 49882 [preauth]
Sep 30 13:09:07 np0005462840.novalocal sshd-session[1236]: Received disconnect from 51.75.194.10 port 44048:11: Bye Bye [preauth]
Sep 30 13:09:07 np0005462840.novalocal sshd-session[1236]: Disconnected from authenticating user root 51.75.194.10 port 44048 [preauth]
Sep 30 13:09:29 np0005462840.novalocal sshd-session[1238]: Received disconnect from 87.251.77.103 port 59102:11: Bye Bye [preauth]
Sep 30 13:09:29 np0005462840.novalocal sshd-session[1238]: Disconnected from authenticating user root 87.251.77.103 port 59102 [preauth]
Sep 30 13:09:56 np0005462840.novalocal sshd-session[1240]: Received disconnect from 91.224.92.28 port 40094:11:  [preauth]
Sep 30 13:09:56 np0005462840.novalocal sshd-session[1240]: Disconnected from authenticating user root 91.224.92.28 port 40094 [preauth]
Sep 30 13:10:00 np0005462840.novalocal sshd-session[1242]: Invalid user platform from 23.95.128.167 port 54938
Sep 30 13:10:00 np0005462840.novalocal sshd-session[1242]: Received disconnect from 23.95.128.167 port 54938:11: Bye Bye [preauth]
Sep 30 13:10:00 np0005462840.novalocal sshd-session[1242]: Disconnected from invalid user platform 23.95.128.167 port 54938 [preauth]
Sep 30 13:10:04 np0005462840.novalocal sshd-session[1244]: Invalid user superadmin from 51.75.194.10 port 48890
Sep 30 13:10:04 np0005462840.novalocal sshd-session[1244]: Received disconnect from 51.75.194.10 port 48890:11: Bye Bye [preauth]
Sep 30 13:10:04 np0005462840.novalocal sshd-session[1244]: Disconnected from invalid user superadmin 51.75.194.10 port 48890 [preauth]
Sep 30 13:10:09 np0005462840.novalocal sshd-session[1246]: Invalid user kodi from 181.212.34.237 port 35453
Sep 30 13:10:09 np0005462840.novalocal sshd-session[1246]: Received disconnect from 181.212.34.237 port 35453:11: Bye Bye [preauth]
Sep 30 13:10:09 np0005462840.novalocal sshd-session[1246]: Disconnected from invalid user kodi 181.212.34.237 port 35453 [preauth]
Sep 30 13:10:11 np0005462840.novalocal sshd-session[1248]: Received disconnect from 82.29.72.161 port 45804:11: Bye Bye [preauth]
Sep 30 13:10:11 np0005462840.novalocal sshd-session[1248]: Disconnected from authenticating user root 82.29.72.161 port 45804 [preauth]
Sep 30 13:10:59 np0005462840.novalocal sshd-session[1252]: Received disconnect from 51.75.194.10 port 55378:11: Bye Bye [preauth]
Sep 30 13:10:59 np0005462840.novalocal sshd-session[1252]: Disconnected from authenticating user root 51.75.194.10 port 55378 [preauth]
Sep 30 13:11:04 np0005462840.novalocal sshd-session[1254]: Invalid user test123 from 23.95.128.167 port 36576
Sep 30 13:11:04 np0005462840.novalocal sshd-session[1254]: Received disconnect from 23.95.128.167 port 36576:11: Bye Bye [preauth]
Sep 30 13:11:04 np0005462840.novalocal sshd-session[1254]: Disconnected from invalid user test123 23.95.128.167 port 36576 [preauth]
Sep 30 13:11:16 np0005462840.novalocal sshd-session[1257]: Invalid user zx from 82.29.72.161 port 41722
Sep 30 13:11:16 np0005462840.novalocal sshd-session[1257]: Received disconnect from 82.29.72.161 port 41722:11: Bye Bye [preauth]
Sep 30 13:11:16 np0005462840.novalocal sshd-session[1257]: Disconnected from invalid user zx 82.29.72.161 port 41722 [preauth]
Sep 30 13:11:20 np0005462840.novalocal sshd-session[1261]: Received disconnect from 181.212.34.237 port 60820:11: Bye Bye [preauth]
Sep 30 13:11:20 np0005462840.novalocal sshd-session[1261]: Disconnected from authenticating user mail 181.212.34.237 port 60820 [preauth]
Sep 30 13:11:51 np0005462840.novalocal sshd-session[1263]: Received disconnect from 51.75.194.10 port 46308:11: Bye Bye [preauth]
Sep 30 13:11:51 np0005462840.novalocal sshd-session[1263]: Disconnected from authenticating user root 51.75.194.10 port 46308 [preauth]
Sep 30 13:12:07 np0005462840.novalocal sshd-session[1265]: Received disconnect from 23.95.128.167 port 43238:11: Bye Bye [preauth]
Sep 30 13:12:07 np0005462840.novalocal sshd-session[1265]: Disconnected from authenticating user root 23.95.128.167 port 43238 [preauth]
Sep 30 13:12:15 np0005462840.novalocal sshd-session[1267]: Received disconnect from 82.29.72.161 port 37644:11: Bye Bye [preauth]
Sep 30 13:12:15 np0005462840.novalocal sshd-session[1267]: Disconnected from authenticating user root 82.29.72.161 port 37644 [preauth]
Sep 30 13:12:24 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 1250
Sep 30 13:12:27 np0005462840.novalocal sshd-session[1269]: Invalid user rana from 181.212.34.237 port 59505
Sep 30 13:12:28 np0005462840.novalocal sshd-session[1269]: Received disconnect from 181.212.34.237 port 59505:11: Bye Bye [preauth]
Sep 30 13:12:28 np0005462840.novalocal sshd-session[1269]: Disconnected from invalid user rana 181.212.34.237 port 59505 [preauth]
Sep 30 13:12:42 np0005462840.novalocal sshd-session[1271]: Received disconnect from 51.75.194.10 port 53916:11: Bye Bye [preauth]
Sep 30 13:12:42 np0005462840.novalocal sshd-session[1271]: Disconnected from authenticating user root 51.75.194.10 port 53916 [preauth]
Sep 30 13:12:49 np0005462840.novalocal sshd-session[1273]: Invalid user test from 121.204.171.142 port 53548
Sep 30 13:12:50 np0005462840.novalocal sshd-session[1273]: Received disconnect from 121.204.171.142 port 53548:11: Bye Bye [preauth]
Sep 30 13:12:50 np0005462840.novalocal sshd-session[1273]: Disconnected from invalid user test 121.204.171.142 port 53548 [preauth]
Sep 30 13:13:09 np0005462840.novalocal sshd-session[1275]: Invalid user extern from 23.95.128.167 port 48964
Sep 30 13:13:09 np0005462840.novalocal sshd-session[1275]: Received disconnect from 23.95.128.167 port 48964:11: Bye Bye [preauth]
Sep 30 13:13:09 np0005462840.novalocal sshd-session[1275]: Disconnected from invalid user extern 23.95.128.167 port 48964 [preauth]
Sep 30 13:13:15 np0005462840.novalocal sshd-session[1277]: Invalid user deploy from 82.29.72.161 port 33560
Sep 30 13:13:15 np0005462840.novalocal sshd-session[1277]: Received disconnect from 82.29.72.161 port 33560:11: Bye Bye [preauth]
Sep 30 13:13:15 np0005462840.novalocal sshd-session[1277]: Disconnected from invalid user deploy 82.29.72.161 port 33560 [preauth]
Sep 30 13:13:19 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 121.204.171.142 to 38.102.83.20, pid = 1259
Sep 30 13:13:21 np0005462840.novalocal sshd-session[1279]: Connection closed by authenticating user root 80.94.95.115 port 33068 [preauth]
Sep 30 13:13:25 np0005462840.novalocal sshd[1005]: drop connection #0 from [59.36.78.66]:59500 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:13:36 np0005462840.novalocal sshd-session[1281]: Invalid user eloa from 181.212.34.237 port 3262
Sep 30 13:13:36 np0005462840.novalocal sshd-session[1283]: Received disconnect from 51.75.194.10 port 41210:11: Bye Bye [preauth]
Sep 30 13:13:36 np0005462840.novalocal sshd-session[1283]: Disconnected from authenticating user root 51.75.194.10 port 41210 [preauth]
Sep 30 13:13:36 np0005462840.novalocal sshd-session[1281]: Received disconnect from 181.212.34.237 port 3262:11: Bye Bye [preauth]
Sep 30 13:13:36 np0005462840.novalocal sshd-session[1281]: Disconnected from invalid user eloa 181.212.34.237 port 3262 [preauth]
Sep 30 13:14:09 np0005462840.novalocal sshd-session[1285]: Invalid user mike from 23.95.128.167 port 33636
Sep 30 13:14:09 np0005462840.novalocal sshd-session[1285]: Received disconnect from 23.95.128.167 port 33636:11: Bye Bye [preauth]
Sep 30 13:14:09 np0005462840.novalocal sshd-session[1285]: Disconnected from invalid user mike 23.95.128.167 port 33636 [preauth]
Sep 30 13:14:12 np0005462840.novalocal sshd[1005]: drop connection #0 from [121.204.171.142]:46406 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:14:18 np0005462840.novalocal sshd-session[1287]: Received disconnect from 82.29.72.161 port 57716:11: Bye Bye [preauth]
Sep 30 13:14:18 np0005462840.novalocal sshd-session[1287]: Disconnected from authenticating user root 82.29.72.161 port 57716 [preauth]
Sep 30 13:14:30 np0005462840.novalocal sshd-session[1289]: Invalid user karthavya from 51.75.194.10 port 38520
Sep 30 13:14:30 np0005462840.novalocal sshd-session[1289]: Received disconnect from 51.75.194.10 port 38520:11: Bye Bye [preauth]
Sep 30 13:14:30 np0005462840.novalocal sshd-session[1289]: Disconnected from invalid user karthavya 51.75.194.10 port 38520 [preauth]
Sep 30 13:14:44 np0005462840.novalocal sshd-session[1292]: Invalid user administrator from 181.212.34.237 port 23928
Sep 30 13:14:45 np0005462840.novalocal sshd-session[1292]: Received disconnect from 181.212.34.237 port 23928:11: Bye Bye [preauth]
Sep 30 13:14:45 np0005462840.novalocal sshd-session[1292]: Disconnected from invalid user administrator 181.212.34.237 port 23928 [preauth]
Sep 30 13:15:11 np0005462840.novalocal sshd-session[1296]: Invalid user iot from 23.95.128.167 port 36856
Sep 30 13:15:11 np0005462840.novalocal sshd-session[1296]: Received disconnect from 23.95.128.167 port 36856:11: Bye Bye [preauth]
Sep 30 13:15:11 np0005462840.novalocal sshd-session[1296]: Disconnected from invalid user iot 23.95.128.167 port 36856 [preauth]
Sep 30 13:15:11 np0005462840.novalocal sshd-session[1294]: Received disconnect from 91.224.92.108 port 30220:11:  [preauth]
Sep 30 13:15:11 np0005462840.novalocal sshd-session[1294]: Disconnected from authenticating user root 91.224.92.108 port 30220 [preauth]
Sep 30 13:15:19 np0005462840.novalocal sshd-session[1299]: Invalid user test from 82.29.72.161 port 53634
Sep 30 13:15:19 np0005462840.novalocal sshd-session[1299]: Received disconnect from 82.29.72.161 port 53634:11: Bye Bye [preauth]
Sep 30 13:15:19 np0005462840.novalocal sshd-session[1299]: Disconnected from invalid user test 82.29.72.161 port 53634 [preauth]
Sep 30 13:15:27 np0005462840.novalocal sshd-session[1301]: Received disconnect from 51.75.194.10 port 43886:11: Bye Bye [preauth]
Sep 30 13:15:27 np0005462840.novalocal sshd-session[1301]: Disconnected from authenticating user root 51.75.194.10 port 43886 [preauth]
Sep 30 13:15:51 np0005462840.novalocal sshd-session[1303]: Invalid user tester from 121.204.171.142 port 52888
Sep 30 13:15:51 np0005462840.novalocal sshd-session[1303]: Received disconnect from 121.204.171.142 port 52888:11: Bye Bye [preauth]
Sep 30 13:15:51 np0005462840.novalocal sshd-session[1303]: Disconnected from invalid user tester 121.204.171.142 port 52888 [preauth]
Sep 30 13:15:56 np0005462840.novalocal sshd-session[1305]: Invalid user ftpuser from 181.212.34.237 port 32705
Sep 30 13:15:57 np0005462840.novalocal sshd-session[1305]: Received disconnect from 181.212.34.237 port 32705:11: Bye Bye [preauth]
Sep 30 13:15:57 np0005462840.novalocal sshd-session[1305]: Disconnected from invalid user ftpuser 181.212.34.237 port 32705 [preauth]
Sep 30 13:16:16 np0005462840.novalocal sshd-session[1307]: Invalid user git from 87.251.77.103 port 52540
Sep 30 13:16:16 np0005462840.novalocal sshd-session[1307]: Received disconnect from 87.251.77.103 port 52540:11: Bye Bye [preauth]
Sep 30 13:16:16 np0005462840.novalocal sshd-session[1307]: Disconnected from invalid user git 87.251.77.103 port 52540 [preauth]
Sep 30 13:16:16 np0005462840.novalocal sshd-session[1309]: Invalid user mastodon from 23.95.128.167 port 54324
Sep 30 13:16:16 np0005462840.novalocal sshd-session[1309]: Received disconnect from 23.95.128.167 port 54324:11: Bye Bye [preauth]
Sep 30 13:16:16 np0005462840.novalocal sshd-session[1309]: Disconnected from invalid user mastodon 23.95.128.167 port 54324 [preauth]
Sep 30 13:16:21 np0005462840.novalocal sshd-session[1311]: Received disconnect from 82.29.72.161 port 49556:11: Bye Bye [preauth]
Sep 30 13:16:21 np0005462840.novalocal sshd-session[1311]: Disconnected from authenticating user root 82.29.72.161 port 49556 [preauth]
Sep 30 13:16:23 np0005462840.novalocal sshd-session[1314]: Received disconnect from 51.75.194.10 port 41870:11: Bye Bye [preauth]
Sep 30 13:16:23 np0005462840.novalocal sshd-session[1314]: Disconnected from authenticating user root 51.75.194.10 port 41870 [preauth]
Sep 30 13:17:08 np0005462840.novalocal sshd-session[1317]: Received disconnect from 181.212.34.237 port 41733:11: Bye Bye [preauth]
Sep 30 13:17:08 np0005462840.novalocal sshd-session[1317]: Disconnected from authenticating user root 181.212.34.237 port 41733 [preauth]
Sep 30 13:17:14 np0005462840.novalocal sshd-session[1319]: Invalid user admin from 139.19.117.130 port 50550
Sep 30 13:17:14 np0005462840.novalocal sshd-session[1319]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Sep 30 13:17:17 np0005462840.novalocal sshd-session[1321]: Invalid user mysftp from 23.95.128.167 port 34960
Sep 30 13:17:17 np0005462840.novalocal sshd-session[1321]: Received disconnect from 23.95.128.167 port 34960:11: Bye Bye [preauth]
Sep 30 13:17:17 np0005462840.novalocal sshd-session[1321]: Disconnected from invalid user mysftp 23.95.128.167 port 34960 [preauth]
Sep 30 13:17:19 np0005462840.novalocal sshd-session[1323]: Invalid user printer from 51.75.194.10 port 47344
Sep 30 13:17:19 np0005462840.novalocal sshd-session[1323]: Received disconnect from 51.75.194.10 port 47344:11: Bye Bye [preauth]
Sep 30 13:17:19 np0005462840.novalocal sshd-session[1323]: Disconnected from invalid user printer 51.75.194.10 port 47344 [preauth]
Sep 30 13:17:23 np0005462840.novalocal sshd-session[1319]: Connection closed by invalid user admin 139.19.117.130 port 50550 [preauth]
Sep 30 13:17:28 np0005462840.novalocal sshd-session[1327]: Received disconnect from 82.29.72.161 port 45478:11: Bye Bye [preauth]
Sep 30 13:17:28 np0005462840.novalocal sshd-session[1327]: Disconnected from authenticating user root 82.29.72.161 port 45478 [preauth]
Sep 30 13:18:13 np0005462840.novalocal sshd-session[1329]: Invalid user test from 51.75.194.10 port 56084
Sep 30 13:18:13 np0005462840.novalocal sshd-session[1329]: Received disconnect from 51.75.194.10 port 56084:11: Bye Bye [preauth]
Sep 30 13:18:13 np0005462840.novalocal sshd-session[1329]: Disconnected from invalid user test 51.75.194.10 port 56084 [preauth]
Sep 30 13:18:17 np0005462840.novalocal sshd-session[1332]: Received disconnect from 23.95.128.167 port 50188:11: Bye Bye [preauth]
Sep 30 13:18:17 np0005462840.novalocal sshd-session[1332]: Disconnected from authenticating user root 23.95.128.167 port 50188 [preauth]
Sep 30 13:18:17 np0005462840.novalocal sshd-session[1334]: Invalid user christine from 181.212.34.237 port 14282
Sep 30 13:18:18 np0005462840.novalocal sshd-session[1334]: Received disconnect from 181.212.34.237 port 14282:11: Bye Bye [preauth]
Sep 30 13:18:18 np0005462840.novalocal sshd-session[1334]: Disconnected from invalid user christine 181.212.34.237 port 14282 [preauth]
Sep 30 13:18:30 np0005462840.novalocal sshd-session[1337]: Invalid user test123 from 82.29.72.161 port 41396
Sep 30 13:18:30 np0005462840.novalocal sshd-session[1337]: Received disconnect from 82.29.72.161 port 41396:11: Bye Bye [preauth]
Sep 30 13:18:30 np0005462840.novalocal sshd-session[1337]: Disconnected from invalid user test123 82.29.72.161 port 41396 [preauth]
Sep 30 13:18:43 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 1316
Sep 30 13:19:06 np0005462840.novalocal sshd-session[1339]: Invalid user foundry from 51.75.194.10 port 39066
Sep 30 13:19:06 np0005462840.novalocal sshd-session[1339]: Received disconnect from 51.75.194.10 port 39066:11: Bye Bye [preauth]
Sep 30 13:19:06 np0005462840.novalocal sshd-session[1339]: Disconnected from invalid user foundry 51.75.194.10 port 39066 [preauth]
Sep 30 13:19:21 np0005462840.novalocal sshd-session[1341]: Invalid user deploy from 23.95.128.167 port 58940
Sep 30 13:19:21 np0005462840.novalocal sshd-session[1341]: Received disconnect from 23.95.128.167 port 58940:11: Bye Bye [preauth]
Sep 30 13:19:21 np0005462840.novalocal sshd-session[1341]: Disconnected from invalid user deploy 23.95.128.167 port 58940 [preauth]
Sep 30 13:19:26 np0005462840.novalocal sshd-session[1343]: Invalid user wcs from 181.212.34.237 port 7313
Sep 30 13:19:26 np0005462840.novalocal sshd-session[1343]: Received disconnect from 181.212.34.237 port 7313:11: Bye Bye [preauth]
Sep 30 13:19:26 np0005462840.novalocal sshd-session[1343]: Disconnected from invalid user wcs 181.212.34.237 port 7313 [preauth]
Sep 30 13:19:29 np0005462840.novalocal sshd-session[1345]: Invalid user ya from 82.29.72.161 port 37314
Sep 30 13:19:29 np0005462840.novalocal sshd-session[1345]: Received disconnect from 82.29.72.161 port 37314:11: Bye Bye [preauth]
Sep 30 13:19:29 np0005462840.novalocal sshd-session[1345]: Disconnected from invalid user ya 82.29.72.161 port 37314 [preauth]
Sep 30 13:19:32 np0005462840.novalocal sshd-session[1347]: Invalid user sanket from 87.251.77.103 port 59062
Sep 30 13:19:32 np0005462840.novalocal sshd-session[1347]: Received disconnect from 87.251.77.103 port 59062:11: Bye Bye [preauth]
Sep 30 13:19:32 np0005462840.novalocal sshd-session[1347]: Disconnected from invalid user sanket 87.251.77.103 port 59062 [preauth]
Sep 30 13:19:48 np0005462840.novalocal sshd[1005]: drop connection #1 from [59.36.78.66]:53082 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:20:00 np0005462840.novalocal sshd-session[1349]: Invalid user foundry from 51.75.194.10 port 49786
Sep 30 13:20:00 np0005462840.novalocal sshd-session[1349]: Received disconnect from 51.75.194.10 port 49786:11: Bye Bye [preauth]
Sep 30 13:20:00 np0005462840.novalocal sshd-session[1349]: Disconnected from invalid user foundry 51.75.194.10 port 49786 [preauth]
Sep 30 13:20:17 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 1331
Sep 30 13:20:22 np0005462840.novalocal sshd-session[1351]: Received disconnect from 23.95.128.167 port 57124:11: Bye Bye [preauth]
Sep 30 13:20:22 np0005462840.novalocal sshd-session[1351]: Disconnected from authenticating user root 23.95.128.167 port 57124 [preauth]
Sep 30 13:20:24 np0005462840.novalocal sshd-session[1353]: Received disconnect from 91.224.92.28 port 30540:11:  [preauth]
Sep 30 13:20:24 np0005462840.novalocal sshd-session[1353]: Disconnected from authenticating user root 91.224.92.28 port 30540 [preauth]
Sep 30 13:20:29 np0005462840.novalocal sshd-session[1355]: Received disconnect from 82.29.72.161 port 33244:11: Bye Bye [preauth]
Sep 30 13:20:29 np0005462840.novalocal sshd-session[1355]: Disconnected from authenticating user root 82.29.72.161 port 33244 [preauth]
Sep 30 13:20:36 np0005462840.novalocal sshd-session[1357]: Invalid user administrator from 181.212.34.237 port 49443
Sep 30 13:20:36 np0005462840.novalocal sshd-session[1357]: Received disconnect from 181.212.34.237 port 49443:11: Bye Bye [preauth]
Sep 30 13:20:36 np0005462840.novalocal sshd-session[1357]: Disconnected from invalid user administrator 181.212.34.237 port 49443 [preauth]
Sep 30 13:20:55 np0005462840.novalocal sshd-session[1359]: Invalid user mike from 51.75.194.10 port 40610
Sep 30 13:20:55 np0005462840.novalocal sshd-session[1359]: Received disconnect from 51.75.194.10 port 40610:11: Bye Bye [preauth]
Sep 30 13:20:55 np0005462840.novalocal sshd-session[1359]: Disconnected from invalid user mike 51.75.194.10 port 40610 [preauth]
Sep 30 13:21:07 np0005462840.novalocal sshd[1005]: drop connection #0 from [59.36.78.66]:44434 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:21:26 np0005462840.novalocal sshd-session[1361]: Invalid user loader from 23.95.128.167 port 40556
Sep 30 13:21:26 np0005462840.novalocal sshd-session[1361]: Received disconnect from 23.95.128.167 port 40556:11: Bye Bye [preauth]
Sep 30 13:21:26 np0005462840.novalocal sshd-session[1361]: Disconnected from invalid user loader 23.95.128.167 port 40556 [preauth]
Sep 30 13:21:33 np0005462840.novalocal sshd-session[1363]: Invalid user superadmin from 82.29.72.161 port 57406
Sep 30 13:21:33 np0005462840.novalocal sshd-session[1363]: Received disconnect from 82.29.72.161 port 57406:11: Bye Bye [preauth]
Sep 30 13:21:33 np0005462840.novalocal sshd-session[1363]: Disconnected from invalid user superadmin 82.29.72.161 port 57406 [preauth]
Sep 30 13:21:52 np0005462840.novalocal sshd-session[1365]: Invalid user ammar from 181.212.34.237 port 2418
Sep 30 13:21:52 np0005462840.novalocal sshd-session[1365]: Received disconnect from 181.212.34.237 port 2418:11: Bye Bye [preauth]
Sep 30 13:21:52 np0005462840.novalocal sshd-session[1365]: Disconnected from invalid user ammar 181.212.34.237 port 2418 [preauth]
Sep 30 13:21:54 np0005462840.novalocal sshd-session[1367]: Received disconnect from 51.75.194.10 port 46762:11: Bye Bye [preauth]
Sep 30 13:21:54 np0005462840.novalocal sshd-session[1367]: Disconnected from authenticating user root 51.75.194.10 port 46762 [preauth]
Sep 30 13:22:32 np0005462840.novalocal sshd-session[1370]: Invalid user auser from 23.95.128.167 port 48274
Sep 30 13:22:32 np0005462840.novalocal sshd-session[1370]: Received disconnect from 23.95.128.167 port 48274:11: Bye Bye [preauth]
Sep 30 13:22:32 np0005462840.novalocal sshd-session[1370]: Disconnected from invalid user auser 23.95.128.167 port 48274 [preauth]
Sep 30 13:22:39 np0005462840.novalocal sshd-session[1373]: Invalid user iot from 82.29.72.161 port 53332
Sep 30 13:22:39 np0005462840.novalocal sshd-session[1373]: Received disconnect from 82.29.72.161 port 53332:11: Bye Bye [preauth]
Sep 30 13:22:39 np0005462840.novalocal sshd-session[1373]: Disconnected from invalid user iot 82.29.72.161 port 53332 [preauth]
Sep 30 13:22:53 np0005462840.novalocal sshd-session[1376]: Invalid user sanket from 51.75.194.10 port 36390
Sep 30 13:22:53 np0005462840.novalocal sshd-session[1376]: Received disconnect from 51.75.194.10 port 36390:11: Bye Bye [preauth]
Sep 30 13:22:53 np0005462840.novalocal sshd-session[1376]: Disconnected from invalid user sanket 51.75.194.10 port 36390 [preauth]
Sep 30 13:22:54 np0005462840.novalocal sshd-session[1378]: Invalid user foundry from 87.251.77.103 port 51172
Sep 30 13:22:55 np0005462840.novalocal sshd-session[1378]: Received disconnect from 87.251.77.103 port 51172:11: Bye Bye [preauth]
Sep 30 13:22:55 np0005462840.novalocal sshd-session[1378]: Disconnected from invalid user foundry 87.251.77.103 port 51172 [preauth]
Sep 30 13:23:05 np0005462840.novalocal sshd-session[1380]: Received disconnect from 181.212.34.237 port 8189:11: Bye Bye [preauth]
Sep 30 13:23:05 np0005462840.novalocal sshd-session[1380]: Disconnected from authenticating user root 181.212.34.237 port 8189 [preauth]
Sep 30 13:23:36 np0005462840.novalocal sshd-session[1383]: Invalid user sanket from 23.95.128.167 port 41004
Sep 30 13:23:36 np0005462840.novalocal sshd-session[1383]: Received disconnect from 23.95.128.167 port 41004:11: Bye Bye [preauth]
Sep 30 13:23:36 np0005462840.novalocal sshd-session[1383]: Disconnected from invalid user sanket 23.95.128.167 port 41004 [preauth]
Sep 30 13:23:39 np0005462840.novalocal sshd-session[1385]: Connection closed by authenticating user root 185.156.73.233 port 60352 [preauth]
Sep 30 13:23:42 np0005462840.novalocal sshd-session[1387]: Invalid user in from 82.29.72.161 port 49254
Sep 30 13:23:42 np0005462840.novalocal sshd-session[1387]: Received disconnect from 82.29.72.161 port 49254:11: Bye Bye [preauth]
Sep 30 13:23:42 np0005462840.novalocal sshd-session[1387]: Disconnected from invalid user in 82.29.72.161 port 49254 [preauth]
Sep 30 13:23:44 np0005462840.novalocal sshd-session[1389]: banner exchange: Connection from 65.49.1.108 port 57338: invalid format
Sep 30 13:23:48 np0005462840.novalocal sshd-session[1390]: Invalid user auser from 51.75.194.10 port 45502
Sep 30 13:23:49 np0005462840.novalocal sshd-session[1390]: Received disconnect from 51.75.194.10 port 45502:11: Bye Bye [preauth]
Sep 30 13:23:49 np0005462840.novalocal sshd-session[1390]: Disconnected from invalid user auser 51.75.194.10 port 45502 [preauth]
Sep 30 13:24:16 np0005462840.novalocal sshd-session[1392]: Received disconnect from 59.36.78.66 port 55374:11: Bye Bye [preauth]
Sep 30 13:24:16 np0005462840.novalocal sshd-session[1392]: Disconnected from authenticating user root 59.36.78.66 port 55374 [preauth]
Sep 30 13:24:21 np0005462840.novalocal sshd-session[1396]: Invalid user web from 181.212.34.237 port 14553
Sep 30 13:24:21 np0005462840.novalocal sshd-session[1396]: Received disconnect from 181.212.34.237 port 14553:11: Bye Bye [preauth]
Sep 30 13:24:21 np0005462840.novalocal sshd-session[1396]: Disconnected from invalid user web 181.212.34.237 port 14553 [preauth]
Sep 30 13:24:33 np0005462840.novalocal sshd-session[1398]: Invalid user printer from 87.251.77.103 port 34664
Sep 30 13:24:33 np0005462840.novalocal sshd-session[1398]: Received disconnect from 87.251.77.103 port 34664:11: Bye Bye [preauth]
Sep 30 13:24:33 np0005462840.novalocal sshd-session[1398]: Disconnected from invalid user printer 87.251.77.103 port 34664 [preauth]
Sep 30 13:24:36 np0005462840.novalocal sshd-session[1400]: Invalid user siesa from 23.95.128.167 port 58218
Sep 30 13:24:36 np0005462840.novalocal sshd-session[1400]: Received disconnect from 23.95.128.167 port 58218:11: Bye Bye [preauth]
Sep 30 13:24:36 np0005462840.novalocal sshd-session[1400]: Disconnected from invalid user siesa 23.95.128.167 port 58218 [preauth]
Sep 30 13:24:41 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 1372
Sep 30 13:24:42 np0005462840.novalocal sshd-session[1402]: Invalid user minecraft from 51.75.194.10 port 46042
Sep 30 13:24:42 np0005462840.novalocal sshd-session[1402]: Received disconnect from 51.75.194.10 port 46042:11: Bye Bye [preauth]
Sep 30 13:24:42 np0005462840.novalocal sshd-session[1402]: Disconnected from invalid user minecraft 51.75.194.10 port 46042 [preauth]
Sep 30 13:24:44 np0005462840.novalocal sshd-session[1404]: Invalid user nc from 82.29.72.161 port 45176
Sep 30 13:24:44 np0005462840.novalocal sshd-session[1404]: Received disconnect from 82.29.72.161 port 45176:11: Bye Bye [preauth]
Sep 30 13:24:44 np0005462840.novalocal sshd-session[1404]: Disconnected from invalid user nc 82.29.72.161 port 45176 [preauth]
Sep 30 13:24:46 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 121.204.171.142 to 38.102.83.20, pid = 1375
Sep 30 13:25:14 np0005462840.novalocal sshd-session[1407]: Received disconnect from 141.98.11.34 port 44408:11:  [preauth]
Sep 30 13:25:14 np0005462840.novalocal sshd-session[1407]: Disconnected from authenticating user root 141.98.11.34 port 44408 [preauth]
Sep 30 13:25:27 np0005462840.novalocal sshd-session[1409]: Invalid user dev from 181.212.34.237 port 19050
Sep 30 13:25:27 np0005462840.novalocal sshd-session[1409]: Received disconnect from 181.212.34.237 port 19050:11: Bye Bye [preauth]
Sep 30 13:25:27 np0005462840.novalocal sshd-session[1409]: Disconnected from invalid user dev 181.212.34.237 port 19050 [preauth]
Sep 30 13:25:33 np0005462840.novalocal sshd-session[1411]: Received disconnect from 23.95.128.167 port 48906:11: Bye Bye [preauth]
Sep 30 13:25:33 np0005462840.novalocal sshd-session[1411]: Disconnected from authenticating user root 23.95.128.167 port 48906 [preauth]
Sep 30 13:25:33 np0005462840.novalocal sshd[1005]: drop connection #1 from [121.204.171.142]:54586 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:25:34 np0005462840.novalocal sshd-session[1413]: Invalid user nc from 51.75.194.10 port 48320
Sep 30 13:25:34 np0005462840.novalocal sshd-session[1413]: Received disconnect from 51.75.194.10 port 48320:11: Bye Bye [preauth]
Sep 30 13:25:34 np0005462840.novalocal sshd-session[1413]: Disconnected from invalid user nc 51.75.194.10 port 48320 [preauth]
Sep 30 13:25:41 np0005462840.novalocal sshd-session[1415]: Invalid user ambari from 82.29.72.161 port 41094
Sep 30 13:25:41 np0005462840.novalocal sshd-session[1415]: Received disconnect from 82.29.72.161 port 41094:11: Bye Bye [preauth]
Sep 30 13:25:41 np0005462840.novalocal sshd-session[1415]: Disconnected from invalid user ambari 82.29.72.161 port 41094 [preauth]
Sep 30 13:25:47 np0005462840.novalocal sshd[1005]: drop connection #1 from [59.36.78.66]:46696 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:26:07 np0005462840.novalocal sshd-session[1417]: Received disconnect from 87.251.77.103 port 48326:11: Bye Bye [preauth]
Sep 30 13:26:07 np0005462840.novalocal sshd-session[1417]: Disconnected from authenticating user root 87.251.77.103 port 48326 [preauth]
Sep 30 13:26:22 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 121.204.171.142 to 38.102.83.20, pid = 1394
Sep 30 13:26:27 np0005462840.novalocal sshd-session[1419]: Invalid user extern from 51.75.194.10 port 38018
Sep 30 13:26:27 np0005462840.novalocal sshd-session[1419]: Received disconnect from 51.75.194.10 port 38018:11: Bye Bye [preauth]
Sep 30 13:26:27 np0005462840.novalocal sshd-session[1419]: Disconnected from invalid user extern 51.75.194.10 port 38018 [preauth]
Sep 30 13:26:33 np0005462840.novalocal sshd-session[1421]: Invalid user nc from 23.95.128.167 port 34140
Sep 30 13:26:33 np0005462840.novalocal sshd-session[1421]: Received disconnect from 23.95.128.167 port 34140:11: Bye Bye [preauth]
Sep 30 13:26:33 np0005462840.novalocal sshd-session[1421]: Disconnected from invalid user nc 23.95.128.167 port 34140 [preauth]
Sep 30 13:26:34 np0005462840.novalocal sshd-session[1423]: Invalid user FAKESSH from 181.212.34.237 port 4395
Sep 30 13:26:34 np0005462840.novalocal sshd-session[1423]: Received disconnect from 181.212.34.237 port 4395:11: Bye Bye [preauth]
Sep 30 13:26:34 np0005462840.novalocal sshd-session[1423]: Disconnected from invalid user FAKESSH 181.212.34.237 port 4395 [preauth]
Sep 30 13:26:39 np0005462840.novalocal sshd-session[1425]: Invalid user thomas from 82.29.72.161 port 37016
Sep 30 13:26:39 np0005462840.novalocal sshd-session[1425]: Received disconnect from 82.29.72.161 port 37016:11: Bye Bye [preauth]
Sep 30 13:26:39 np0005462840.novalocal sshd-session[1425]: Disconnected from invalid user thomas 82.29.72.161 port 37016 [preauth]
Sep 30 13:27:21 np0005462840.novalocal sshd-session[1428]: Invalid user mastodon from 51.75.194.10 port 34436
Sep 30 13:27:21 np0005462840.novalocal sshd-session[1428]: Received disconnect from 51.75.194.10 port 34436:11: Bye Bye [preauth]
Sep 30 13:27:21 np0005462840.novalocal sshd-session[1428]: Disconnected from invalid user mastodon 51.75.194.10 port 34436 [preauth]
Sep 30 13:27:22 np0005462840.novalocal sshd-session[1427]: Invalid user administrator from 59.36.78.66 port 38044
Sep 30 13:27:22 np0005462840.novalocal sshd-session[1427]: Received disconnect from 59.36.78.66 port 38044:11: Bye Bye [preauth]
Sep 30 13:27:22 np0005462840.novalocal sshd-session[1427]: Disconnected from invalid user administrator 59.36.78.66 port 38044 [preauth]
Sep 30 13:27:33 np0005462840.novalocal sshd-session[1431]: Invalid user karthavya from 23.95.128.167 port 35056
Sep 30 13:27:33 np0005462840.novalocal sshd-session[1431]: Received disconnect from 23.95.128.167 port 35056:11: Bye Bye [preauth]
Sep 30 13:27:33 np0005462840.novalocal sshd-session[1431]: Disconnected from invalid user karthavya 23.95.128.167 port 35056 [preauth]
Sep 30 13:27:39 np0005462840.novalocal sshd-session[1434]: Invalid user test from 82.29.72.161 port 32938
Sep 30 13:27:39 np0005462840.novalocal sshd-session[1434]: Received disconnect from 82.29.72.161 port 32938:11: Bye Bye [preauth]
Sep 30 13:27:39 np0005462840.novalocal sshd-session[1434]: Disconnected from invalid user test 82.29.72.161 port 32938 [preauth]
Sep 30 13:27:44 np0005462840.novalocal sshd-session[1436]: Invalid user foundry from 181.212.34.237 port 18218
Sep 30 13:27:45 np0005462840.novalocal sshd-session[1436]: Received disconnect from 181.212.34.237 port 18218:11: Bye Bye [preauth]
Sep 30 13:27:45 np0005462840.novalocal sshd-session[1436]: Disconnected from invalid user foundry 181.212.34.237 port 18218 [preauth]
Sep 30 13:28:38 np0005462840.novalocal sshd-session[1439]: Invalid user zx from 23.95.128.167 port 60390
Sep 30 13:28:38 np0005462840.novalocal sshd-session[1439]: Received disconnect from 23.95.128.167 port 60390:11: Bye Bye [preauth]
Sep 30 13:28:38 np0005462840.novalocal sshd-session[1439]: Disconnected from invalid user zx 23.95.128.167 port 60390 [preauth]
Sep 30 13:28:42 np0005462840.novalocal sshd-session[1441]: Invalid user sanket from 82.29.72.161 port 57092
Sep 30 13:28:42 np0005462840.novalocal sshd-session[1441]: Received disconnect from 82.29.72.161 port 57092:11: Bye Bye [preauth]
Sep 30 13:28:42 np0005462840.novalocal sshd-session[1441]: Disconnected from invalid user sanket 82.29.72.161 port 57092 [preauth]
Sep 30 13:29:02 np0005462840.novalocal sshd-session[1444]: Invalid user skim from 181.212.34.237 port 41256
Sep 30 13:29:02 np0005462840.novalocal sshd-session[1444]: Received disconnect from 181.212.34.237 port 41256:11: Bye Bye [preauth]
Sep 30 13:29:02 np0005462840.novalocal sshd-session[1444]: Disconnected from invalid user skim 181.212.34.237 port 41256 [preauth]
Sep 30 13:29:40 np0005462840.novalocal sshd-session[1446]: Invalid user superadmin from 23.95.128.167 port 51846
Sep 30 13:29:40 np0005462840.novalocal sshd-session[1446]: Received disconnect from 23.95.128.167 port 51846:11: Bye Bye [preauth]
Sep 30 13:29:40 np0005462840.novalocal sshd-session[1446]: Disconnected from invalid user superadmin 23.95.128.167 port 51846 [preauth]
Sep 30 13:29:45 np0005462840.novalocal sshd-session[1448]: Invalid user mike from 82.29.72.161 port 53018
Sep 30 13:29:45 np0005462840.novalocal sshd-session[1448]: Received disconnect from 82.29.72.161 port 53018:11: Bye Bye [preauth]
Sep 30 13:29:45 np0005462840.novalocal sshd-session[1448]: Disconnected from invalid user mike 82.29.72.161 port 53018 [preauth]
Sep 30 13:30:19 np0005462840.novalocal sshd-session[1450]: Received disconnect from 181.212.34.237 port 21674:11: Bye Bye [preauth]
Sep 30 13:30:19 np0005462840.novalocal sshd-session[1450]: Disconnected from authenticating user root 181.212.34.237 port 21674 [preauth]
Sep 30 13:30:24 np0005462840.novalocal sshd-session[1452]: Received disconnect from 59.36.78.66 port 48952:11: Bye Bye [preauth]
Sep 30 13:30:24 np0005462840.novalocal sshd-session[1452]: Disconnected from authenticating user root 59.36.78.66 port 48952 [preauth]
Sep 30 13:30:31 np0005462840.novalocal sshd-session[1454]: Received disconnect from 91.224.92.32 port 36968:11:  [preauth]
Sep 30 13:30:31 np0005462840.novalocal sshd-session[1454]: Disconnected from authenticating user root 91.224.92.32 port 36968 [preauth]
Sep 30 13:30:32 np0005462840.novalocal sshd-session[1443]: Connection closed by 59.36.78.66 port 57626 [preauth]
Sep 30 13:30:33 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 121.204.171.142 to 38.102.83.20, pid = 1438
Sep 30 13:30:48 np0005462840.novalocal sshd-session[1457]: Invalid user printer from 82.29.72.161 port 48940
Sep 30 13:30:48 np0005462840.novalocal sshd-session[1457]: Received disconnect from 82.29.72.161 port 48940:11: Bye Bye [preauth]
Sep 30 13:30:48 np0005462840.novalocal sshd-session[1457]: Disconnected from invalid user printer 82.29.72.161 port 48940 [preauth]
Sep 30 13:31:29 np0005462840.novalocal sshd-session[1459]: Invalid user dc from 181.212.34.237 port 44595
Sep 30 13:31:29 np0005462840.novalocal sshd-session[1459]: Received disconnect from 181.212.34.237 port 44595:11: Bye Bye [preauth]
Sep 30 13:31:29 np0005462840.novalocal sshd-session[1459]: Disconnected from invalid user dc 181.212.34.237 port 44595 [preauth]
Sep 30 13:31:44 np0005462840.novalocal sshd-session[1461]: Invalid user test from 23.95.128.167 port 33512
Sep 30 13:31:44 np0005462840.novalocal sshd-session[1461]: Received disconnect from 23.95.128.167 port 33512:11: Bye Bye [preauth]
Sep 30 13:31:44 np0005462840.novalocal sshd-session[1461]: Disconnected from invalid user test 23.95.128.167 port 33512 [preauth]
Sep 30 13:31:47 np0005462840.novalocal sshd-session[1463]: Invalid user platform from 82.29.72.161 port 44858
Sep 30 13:31:47 np0005462840.novalocal sshd-session[1463]: Received disconnect from 82.29.72.161 port 44858:11: Bye Bye [preauth]
Sep 30 13:31:47 np0005462840.novalocal sshd-session[1463]: Disconnected from invalid user platform 82.29.72.161 port 44858 [preauth]
Sep 30 13:31:52 np0005462840.novalocal sshd[1005]: drop connection #1 from [121.204.171.142]:44338 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:32:39 np0005462840.novalocal sshd-session[1467]: Received disconnect from 181.212.34.237 port 57560:11: Bye Bye [preauth]
Sep 30 13:32:39 np0005462840.novalocal sshd-session[1467]: Disconnected from authenticating user root 181.212.34.237 port 57560 [preauth]
Sep 30 13:32:43 np0005462840.novalocal sshd-session[1469]: Invalid user git from 23.95.128.167 port 32846
Sep 30 13:32:43 np0005462840.novalocal sshd-session[1469]: Received disconnect from 23.95.128.167 port 32846:11: Bye Bye [preauth]
Sep 30 13:32:43 np0005462840.novalocal sshd-session[1469]: Disconnected from invalid user git 23.95.128.167 port 32846 [preauth]
Sep 30 13:32:47 np0005462840.novalocal sshd-session[1471]: Invalid user Administrator from 80.94.95.115 port 27680
Sep 30 13:32:48 np0005462840.novalocal sshd-session[1471]: Connection closed by invalid user Administrator 80.94.95.115 port 27680 [preauth]
Sep 30 13:32:51 np0005462840.novalocal sshd-session[1473]: Received disconnect from 87.251.77.103 port 47528:11: Bye Bye [preauth]
Sep 30 13:32:51 np0005462840.novalocal sshd-session[1473]: Disconnected from authenticating user root 87.251.77.103 port 47528 [preauth]
Sep 30 13:33:49 np0005462840.novalocal sshd-session[1475]: Received disconnect from 181.212.34.237 port 39791:11: Bye Bye [preauth]
Sep 30 13:33:49 np0005462840.novalocal sshd-session[1475]: Disconnected from authenticating user root 181.212.34.237 port 39791 [preauth]
Sep 30 13:33:50 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 1465
Sep 30 13:34:05 np0005462840.novalocal sshd-session[1477]: Accepted publickey for zuul from 38.102.83.114 port 55306 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Sep 30 13:34:05 np0005462840.novalocal systemd[1]: Created slice User Slice of UID 1000.
Sep 30 13:34:05 np0005462840.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Sep 30 13:34:05 np0005462840.novalocal systemd-logind[808]: New session 1 of user zuul.
Sep 30 13:34:05 np0005462840.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Sep 30 13:34:05 np0005462840.novalocal systemd[1]: Starting User Manager for UID 1000...
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Queued start job for default target Main User Target.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Created slice User Application Slice.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Started Mark boot as successful after the user session has run 2 minutes.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Reached target Paths.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Reached target Timers.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Starting D-Bus User Message Bus Socket...
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Starting Create User's Volatile Files and Directories...
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Listening on D-Bus User Message Bus Socket.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Reached target Sockets.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Finished Create User's Volatile Files and Directories.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Reached target Basic System.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Reached target Main User Target.
Sep 30 13:34:05 np0005462840.novalocal systemd[1481]: Startup finished in 109ms.
Sep 30 13:34:05 np0005462840.novalocal systemd[1]: Started User Manager for UID 1000.
Sep 30 13:34:05 np0005462840.novalocal systemd[1]: Started Session 1 of User zuul.
Sep 30 13:34:05 np0005462840.novalocal sshd-session[1477]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 13:34:06 np0005462840.novalocal python3[1564]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 13:34:08 np0005462840.novalocal python3[1592]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 13:34:16 np0005462840.novalocal python3[1650]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 13:34:17 np0005462840.novalocal python3[1690]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Sep 30 13:34:19 np0005462840.novalocal python3[1716]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEni7VUyuetxshJxcVE6fjQZOXzYAg8FpVt+iFu+Y09EQFGPhKnaKr5ydhh2xlvXQGLSXAr/J8kuafZP71jcMP1x3LyrYQ5OGcrI3b+ObcwbvDe23UzqlEReJYf7w98ab4n1RW5ivjPvsSAfXs44LGR6EoA5pRP58l1nlQo1f/ocrE+gpEjLBHTZKGlXm8wHPQMUjR61jy4DTQcwxomsdRZWgktRzRjWKM5uRzZiWaNp4c7Sgo7Du8Yf/LRRrPkuuKD5uLRsdDvuC5lJKeQD+v3Xiz6aArjDUh7gccuYtYxiJsyAn9dmXi50u6YL3FRa6lLLsCT1FA0wHYNOtjOpwqCV0XyqpOTxmIxvwDYVTjL+JjoGQWOHqELybFFkpE/EdTQKBDSodvWYIm9ShXX6iFlsQSySuou+wXdsdj1ed1QaqrJD7qs5KxWJkIa1bXbqxeGzfTGgFcULWf0mIMsKnA/t/i8tGa69/qBEECW9mFW8Dv9EUGDFXGIE6zHu37j2E= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:20 np0005462840.novalocal python3[1740]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:20 np0005462840.novalocal python3[1839]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:34:20 np0005462840.novalocal python3[1910]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759239260.2397835-251-194715487616836/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=1c8ce4d3f02e439ea2c8224d59e51cbc_id_rsa follow=False checksum=0fa76fa721166710973cd3a947b14148665c0c3e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:21 np0005462840.novalocal python3[2033]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:34:21 np0005462840.novalocal python3[2104]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759239261.1770914-306-169767729972641/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=1c8ce4d3f02e439ea2c8224d59e51cbc_id_rsa.pub follow=False checksum=132908c2c9a0db365f1ecda812dec4b38a7fc72c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:23 np0005462840.novalocal python3[2152]: ansible-ping Invoked with data=pong
Sep 30 13:34:24 np0005462840.novalocal python3[2176]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 13:34:26 np0005462840.novalocal python3[2234]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Sep 30 13:34:27 np0005462840.novalocal python3[2266]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:28 np0005462840.novalocal python3[2290]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:28 np0005462840.novalocal python3[2314]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:28 np0005462840.novalocal python3[2338]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:28 np0005462840.novalocal python3[2362]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:29 np0005462840.novalocal python3[2386]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:30 np0005462840.novalocal sudo[2410]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnjlybxpxidvtdeklxeqpfvogepxcswf ; /usr/bin/python3'
Sep 30 13:34:30 np0005462840.novalocal sudo[2410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:34:31 np0005462840.novalocal python3[2412]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:31 np0005462840.novalocal sudo[2410]: pam_unix(sudo:session): session closed for user root
Sep 30 13:34:31 np0005462840.novalocal sudo[2488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdbofdomwqtogoddqnnlfptujtkugsxr ; /usr/bin/python3'
Sep 30 13:34:31 np0005462840.novalocal sudo[2488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:34:31 np0005462840.novalocal python3[2490]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:34:31 np0005462840.novalocal sudo[2488]: pam_unix(sudo:session): session closed for user root
Sep 30 13:34:32 np0005462840.novalocal sudo[2561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksesqxfwzjwcbfjsnyvmaciztcdqmayb ; /usr/bin/python3'
Sep 30 13:34:32 np0005462840.novalocal sudo[2561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:34:32 np0005462840.novalocal python3[2563]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759239271.3012996-31-136445397131482/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:32 np0005462840.novalocal sudo[2561]: pam_unix(sudo:session): session closed for user root
Sep 30 13:34:32 np0005462840.novalocal python3[2611]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:33 np0005462840.novalocal python3[2635]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:33 np0005462840.novalocal python3[2659]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:33 np0005462840.novalocal python3[2683]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:33 np0005462840.novalocal python3[2707]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:34 np0005462840.novalocal python3[2731]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:34 np0005462840.novalocal python3[2755]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:34 np0005462840.novalocal python3[2779]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:35 np0005462840.novalocal python3[2803]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:35 np0005462840.novalocal python3[2827]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:35 np0005462840.novalocal python3[2851]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:35 np0005462840.novalocal python3[2875]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:36 np0005462840.novalocal python3[2899]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:36 np0005462840.novalocal python3[2923]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:36 np0005462840.novalocal python3[2947]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:36 np0005462840.novalocal python3[2971]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:37 np0005462840.novalocal python3[2995]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:37 np0005462840.novalocal python3[3019]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:37 np0005462840.novalocal python3[3043]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:37 np0005462840.novalocal python3[3067]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:38 np0005462840.novalocal python3[3091]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:38 np0005462840.novalocal python3[3115]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:38 np0005462840.novalocal python3[3139]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:39 np0005462840.novalocal python3[3163]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:39 np0005462840.novalocal python3[3187]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:39 np0005462840.novalocal python3[3211]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:34:41 np0005462840.novalocal sudo[3236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvayspoprqfzpqghihuvlvwrkorddfqp ; /usr/bin/python3'
Sep 30 13:34:41 np0005462840.novalocal sudo[3236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:34:42 np0005462840.novalocal python3[3238]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Sep 30 13:34:42 np0005462840.novalocal systemd[1]: Starting Time & Date Service...
Sep 30 13:34:42 np0005462840.novalocal systemd[1]: Started Time & Date Service.
Sep 30 13:34:42 np0005462840.novalocal systemd-timedated[3240]: Changed time zone to 'UTC' (UTC).
Sep 30 13:34:42 np0005462840.novalocal systemd[1]: Starting dnf makecache...
Sep 30 13:34:42 np0005462840.novalocal sudo[3236]: pam_unix(sudo:session): session closed for user root
Sep 30 13:34:42 np0005462840.novalocal sudo[3268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlezlnjpjhwmovobgydkkwgfkyomcqps ; /usr/bin/python3'
Sep 30 13:34:42 np0005462840.novalocal sudo[3268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:34:42 np0005462840.novalocal python3[3270]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:42 np0005462840.novalocal sudo[3268]: pam_unix(sudo:session): session closed for user root
Sep 30 13:34:42 np0005462840.novalocal dnf[3243]: Failed determining last makecache time.
Sep 30 13:34:43 np0005462840.novalocal python3[3347]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:34:43 np0005462840.novalocal dnf[3243]: CentOS Stream 9 - BaseOS                         25 kB/s | 7.0 kB     00:00
Sep 30 13:34:43 np0005462840.novalocal python3[3422]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759239282.840902-251-218302603800204/source _original_basename=tmpzkqp3827 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:43 np0005462840.novalocal dnf[3243]: CentOS Stream 9 - AppStream                      29 kB/s | 7.1 kB     00:00
Sep 30 13:34:43 np0005462840.novalocal dnf[3243]: CentOS Stream 9 - CRB                            70 kB/s | 6.9 kB     00:00
Sep 30 13:34:43 np0005462840.novalocal python3[3523]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:34:44 np0005462840.novalocal dnf[3243]: CentOS Stream 9 - Extras packages                31 kB/s | 8.0 kB     00:00
Sep 30 13:34:44 np0005462840.novalocal python3[3595]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759239283.7149727-301-43105538117585/source _original_basename=tmp6bw6b5yf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:44 np0005462840.novalocal dnf[3243]: Metadata cache created.
Sep 30 13:34:44 np0005462840.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Sep 30 13:34:44 np0005462840.novalocal systemd[1]: Finished dnf makecache.
Sep 30 13:34:45 np0005462840.novalocal sudo[3696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzyysuccuhjprcrjvxxwwuifacoazohr ; /usr/bin/python3'
Sep 30 13:34:45 np0005462840.novalocal sudo[3696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:34:45 np0005462840.novalocal python3[3698]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:34:45 np0005462840.novalocal sudo[3696]: pam_unix(sudo:session): session closed for user root
Sep 30 13:34:46 np0005462840.novalocal sudo[3769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjktzacgxieduqehnvratbfreuoyiali ; /usr/bin/python3'
Sep 30 13:34:46 np0005462840.novalocal sudo[3769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:34:46 np0005462840.novalocal python3[3771]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759239285.4904604-381-15075642071461/source _original_basename=tmpuk6zqcx2 follow=False checksum=6cbe59410b7de8cef4e7b572834f646539a41bfa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:46 np0005462840.novalocal sudo[3769]: pam_unix(sudo:session): session closed for user root
Sep 30 13:34:47 np0005462840.novalocal python3[3819]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:34:47 np0005462840.novalocal python3[3845]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:34:47 np0005462840.novalocal sudo[3923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkuhiavzlokhxwulszggropmgnjcjiyq ; /usr/bin/python3'
Sep 30 13:34:47 np0005462840.novalocal sudo[3923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:34:47 np0005462840.novalocal python3[3925]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:34:47 np0005462840.novalocal sudo[3923]: pam_unix(sudo:session): session closed for user root
Sep 30 13:34:48 np0005462840.novalocal sudo[3996]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnycqjuxnrchpdpihwhejksntzbjmjaz ; /usr/bin/python3'
Sep 30 13:34:48 np0005462840.novalocal sudo[3996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:34:48 np0005462840.novalocal python3[3998]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759239287.5808687-451-84789301218340/source _original_basename=tmpv6nicsr3 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:48 np0005462840.novalocal sudo[3996]: pam_unix(sudo:session): session closed for user root
Sep 30 13:34:48 np0005462840.novalocal sudo[4047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzpcsmgjnkeegygcyoqwyyszfefqxwzo ; /usr/bin/python3'
Sep 30 13:34:48 np0005462840.novalocal sudo[4047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:34:48 np0005462840.novalocal python3[4049]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-c83d-10df-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:34:48 np0005462840.novalocal sudo[4047]: pam_unix(sudo:session): session closed for user root
Sep 30 13:34:49 np0005462840.novalocal python3[4077]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e3b-3c83-c83d-10df-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Sep 30 13:34:51 np0005462840.novalocal python3[4105]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:34:56 np0005462840.novalocal sshd[1005]: drop connection #0 from [59.36.78.66]:51148 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:35:03 np0005462840.novalocal sshd-session[4106]: Received disconnect from 181.212.34.237 port 24952:11: Bye Bye [preauth]
Sep 30 13:35:03 np0005462840.novalocal sshd-session[4106]: Disconnected from authenticating user root 181.212.34.237 port 24952 [preauth]
Sep 30 13:35:07 np0005462840.novalocal sshd-session[4108]: Invalid user parking from 121.204.171.142 port 55868
Sep 30 13:35:07 np0005462840.novalocal sshd-session[4108]: Received disconnect from 121.204.171.142 port 55868:11: Bye Bye [preauth]
Sep 30 13:35:07 np0005462840.novalocal sshd-session[4108]: Disconnected from invalid user parking 121.204.171.142 port 55868 [preauth]
Sep 30 13:35:09 np0005462840.novalocal sudo[4133]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynuhbqhnnnrfdivxhfjzemoipgutafud ; /usr/bin/python3'
Sep 30 13:35:09 np0005462840.novalocal sudo[4133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:35:09 np0005462840.novalocal python3[4135]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:35:09 np0005462840.novalocal sudo[4133]: pam_unix(sudo:session): session closed for user root
Sep 30 13:35:12 np0005462840.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Sep 30 13:35:44 np0005462840.novalocal sshd-session[4140]: Received disconnect from 141.98.10.225 port 48016:11:  [preauth]
Sep 30 13:35:44 np0005462840.novalocal sshd-session[4140]: Disconnected from authenticating user root 141.98.10.225 port 48016 [preauth]
Sep 30 13:35:50 np0005462840.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 30 13:35:50 np0005462840.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Sep 30 13:35:50 np0005462840.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Sep 30 13:35:50 np0005462840.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Sep 30 13:35:50 np0005462840.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Sep 30 13:35:50 np0005462840.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Sep 30 13:35:50 np0005462840.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Sep 30 13:35:50 np0005462840.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Sep 30 13:35:50 np0005462840.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Sep 30 13:35:50 np0005462840.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7183] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Sep 30 13:35:50 np0005462840.novalocal systemd-udevd[4143]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7340] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7364] settings: (eth1): created default wired connection 'Wired connection 1'
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7367] device (eth1): carrier: link connected
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7368] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7373] policy: auto-activating connection 'Wired connection 1' (b787acf5-2088-3281-a8cd-a822ba754a2a)
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7376] device (eth1): Activation: starting connection 'Wired connection 1' (b787acf5-2088-3281-a8cd-a822ba754a2a)
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7377] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7379] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7382] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 13:35:50 np0005462840.novalocal NetworkManager[861]: <info>  [1759239350.7385] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Sep 30 13:35:51 np0005462840.novalocal python3[4169]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-874d-3d5f-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:36:01 np0005462840.novalocal sudo[4249]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riynagqeusyumwogsmquhwtjpdxeppbx ; OS_CLOUD=vexxhost /usr/bin/python3'
Sep 30 13:36:01 np0005462840.novalocal sudo[4249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:36:01 np0005462840.novalocal python3[4251]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:36:01 np0005462840.novalocal sudo[4249]: pam_unix(sudo:session): session closed for user root
Sep 30 13:36:01 np0005462840.novalocal sudo[4322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uztqukhxjntszgduuargsemaflivmoth ; OS_CLOUD=vexxhost /usr/bin/python3'
Sep 30 13:36:01 np0005462840.novalocal sudo[4322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:36:01 np0005462840.novalocal python3[4324]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759239361.2609324-104-278307465177632/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=c3c8b79debf5eb276e3ea7f8a7d0f93b842ab852 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:36:01 np0005462840.novalocal sudo[4322]: pam_unix(sudo:session): session closed for user root
Sep 30 13:36:02 np0005462840.novalocal sudo[4372]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frakgkrefnyuoorqjqlpfeajkjpljcuf ; OS_CLOUD=vexxhost /usr/bin/python3'
Sep 30 13:36:02 np0005462840.novalocal sudo[4372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:36:02 np0005462840.novalocal python3[4374]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: Stopped Network Manager Wait Online.
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: Stopping Network Manager Wait Online...
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[861]: <info>  [1759239362.6769] caught SIGTERM, shutting down normally.
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: Stopping Network Manager...
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[861]: <info>  [1759239362.6775] dhcp4 (eth0): canceled DHCP transaction
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[861]: <info>  [1759239362.6775] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[861]: <info>  [1759239362.6776] dhcp4 (eth0): state changed no lease
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[861]: <info>  [1759239362.6777] manager: NetworkManager state is now CONNECTING
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[861]: <info>  [1759239362.6903] dhcp4 (eth1): canceled DHCP transaction
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[861]: <info>  [1759239362.6904] dhcp4 (eth1): state changed no lease
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[861]: <info>  [1759239362.8415] exiting (success)
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: Stopped Network Manager.
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: NetworkManager.service: Consumed 17.944s CPU time, 9.9M memory peak.
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: Starting Network Manager...
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.8995] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:1819ccf5-a897-485a-80b9-c42731ad5ac8)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.8998] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9067] manager[0x5576ed85c070]: monitoring kernel firmware directory '/lib/firmware'.
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: Starting Hostname Service...
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: Started Hostname Service.
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9819] hostname: hostname: using hostnamed
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9820] hostname: static hostname changed from (none) to "np0005462840.novalocal"
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9825] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9829] manager[0x5576ed85c070]: rfkill: Wi-Fi hardware radio set enabled
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9830] manager[0x5576ed85c070]: rfkill: WWAN hardware radio set enabled
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9857] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9857] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9857] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9858] manager: Networking is enabled by state file
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9860] settings: Loaded settings plugin: keyfile (internal)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9864] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9891] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9900] dhcp: init: Using DHCP client 'internal'
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9903] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9909] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9916] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9924] device (lo): Activation: starting connection 'lo' (5742ac42-8bba-40d6-bdcd-b6cbacaa64c1)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9930] device (eth0): carrier: link connected
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9934] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9938] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9939] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9944] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9949] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9953] device (eth1): carrier: link connected
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9956] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9960] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (b787acf5-2088-3281-a8cd-a822ba754a2a) (indicated)
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9960] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9964] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9969] device (eth1): Activation: starting connection 'Wired connection 1' (b787acf5-2088-3281-a8cd-a822ba754a2a)
Sep 30 13:36:02 np0005462840.novalocal systemd[1]: Started Network Manager.
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9974] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Sep 30 13:36:02 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239362.9989] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0002] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0004] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0006] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0008] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0010] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0011] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0015] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0020] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0022] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0030] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0033] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0042] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0046] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.0049] device (lo): Activation: successful, device activated.
Sep 30 13:36:03 np0005462840.novalocal systemd[1]: Starting Network Manager Wait Online...
Sep 30 13:36:03 np0005462840.novalocal sudo[4372]: pam_unix(sudo:session): session closed for user root
Sep 30 13:36:03 np0005462840.novalocal python3[4441]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-874d-3d5f-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.3840] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.3848] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.3960] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.3983] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.3985] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.3988] manager: NetworkManager state is now CONNECTED_SITE
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.3991] device (eth0): Activation: successful, device activated.
Sep 30 13:36:03 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239363.3995] manager: NetworkManager state is now CONNECTED_GLOBAL
Sep 30 13:36:11 np0005462840.novalocal sshd-session[4463]: Invalid user seekcy from 210.90.155.80 port 39938
Sep 30 13:36:12 np0005462840.novalocal sshd-session[4463]: Received disconnect from 210.90.155.80 port 39938:11: Bye Bye [preauth]
Sep 30 13:36:12 np0005462840.novalocal sshd-session[4463]: Disconnected from invalid user seekcy 210.90.155.80 port 39938 [preauth]
Sep 30 13:36:13 np0005462840.novalocal sshd-session[4465]: Invalid user minecraft from 87.251.77.103 port 37724
Sep 30 13:36:13 np0005462840.novalocal sshd-session[4465]: Received disconnect from 87.251.77.103 port 37724:11: Bye Bye [preauth]
Sep 30 13:36:13 np0005462840.novalocal sshd-session[4465]: Disconnected from invalid user minecraft 87.251.77.103 port 37724 [preauth]
Sep 30 13:36:13 np0005462840.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 13:36:16 np0005462840.novalocal sshd-session[4467]: Received disconnect from 181.212.34.237 port 24720:11: Bye Bye [preauth]
Sep 30 13:36:16 np0005462840.novalocal sshd-session[4467]: Disconnected from authenticating user root 181.212.34.237 port 24720 [preauth]
Sep 30 13:36:25 np0005462840.novalocal sshd-session[4173]: error: kex_exchange_identification: read: Connection reset by peer
Sep 30 13:36:25 np0005462840.novalocal sshd-session[4173]: Connection reset by 45.140.17.97 port 1758
Sep 30 13:36:28 np0005462840.novalocal sshd-session[4469]: Unable to negotiate with 103.172.154.255 port 50322: no matching host key type found. Their offer: ssh-rsa [preauth]
Sep 30 13:36:31 np0005462840.novalocal sshd-session[4471]: Received disconnect from 121.204.171.142 port 59276:11: Bye Bye [preauth]
Sep 30 13:36:31 np0005462840.novalocal sshd-session[4471]: Disconnected from authenticating user root 121.204.171.142 port 59276 [preauth]
Sep 30 13:36:33 np0005462840.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.5865] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Sep 30 13:36:48 np0005462840.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 13:36:48 np0005462840.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6134] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6142] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6164] device (eth1): Activation: successful, device activated.
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6175] manager: startup complete
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6181] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <warn>  [1759239408.6200] device (eth1): Activation: failed for connection 'Wired connection 1'
Sep 30 13:36:48 np0005462840.novalocal systemd[1]: Finished Network Manager Wait Online.
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6210] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6342] dhcp4 (eth1): canceled DHCP transaction
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6343] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6343] dhcp4 (eth1): state changed no lease
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6359] policy: auto-activating connection 'ci-private-network' (70493aff-8f50-53fb-8a3a-7b4dcd69c293)
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6363] device (eth1): Activation: starting connection 'ci-private-network' (70493aff-8f50-53fb-8a3a-7b4dcd69c293)
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6364] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6365] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6370] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6376] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6839] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6841] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 13:36:48 np0005462840.novalocal NetworkManager[4391]: <info>  [1759239408.6846] device (eth1): Activation: successful, device activated.
Sep 30 13:36:58 np0005462840.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 13:37:03 np0005462840.novalocal sshd-session[1491]: Received disconnect from 38.102.83.114 port 55306:11: disconnected by user
Sep 30 13:37:03 np0005462840.novalocal sshd-session[1491]: Disconnected from user zuul 38.102.83.114 port 55306
Sep 30 13:37:03 np0005462840.novalocal sshd-session[1477]: pam_unix(sshd:session): session closed for user zuul
Sep 30 13:37:03 np0005462840.novalocal systemd-logind[808]: Session 1 logged out. Waiting for processes to exit.
Sep 30 13:37:04 np0005462840.novalocal systemd[1481]: Starting Mark boot as successful...
Sep 30 13:37:04 np0005462840.novalocal systemd[1481]: Finished Mark boot as successful.
Sep 30 13:37:28 np0005462840.novalocal sshd-session[4502]: Invalid user grid from 181.212.34.237 port 45461
Sep 30 13:37:28 np0005462840.novalocal sshd-session[4502]: Received disconnect from 181.212.34.237 port 45461:11: Bye Bye [preauth]
Sep 30 13:37:28 np0005462840.novalocal sshd-session[4502]: Disconnected from invalid user grid 181.212.34.237 port 45461 [preauth]
Sep 30 13:37:30 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 14.103.128.118 to 38.102.83.20, pid = 4139
Sep 30 13:37:51 np0005462840.novalocal sshd-session[4504]: Invalid user ya from 87.251.77.103 port 41778
Sep 30 13:37:51 np0005462840.novalocal sshd-session[4504]: Received disconnect from 87.251.77.103 port 41778:11: Bye Bye [preauth]
Sep 30 13:37:51 np0005462840.novalocal sshd-session[4504]: Disconnected from invalid user ya 87.251.77.103 port 41778 [preauth]
Sep 30 13:38:13 np0005462840.novalocal sshd-session[4507]: Connection closed by 59.36.78.66 port 33864 [preauth]
Sep 30 13:38:14 np0005462840.novalocal sshd-session[4509]: Accepted publickey for zuul from 38.102.83.114 port 53866 ssh2: RSA SHA256:PQ5gAlGqGw5eyUoP3tGuJWzdC0qrtAhhgPp/wWGLEq4
Sep 30 13:38:14 np0005462840.novalocal systemd-logind[808]: New session 3 of user zuul.
Sep 30 13:38:14 np0005462840.novalocal systemd[1]: Started Session 3 of User zuul.
Sep 30 13:38:14 np0005462840.novalocal sshd-session[4509]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 13:38:14 np0005462840.novalocal sudo[4588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydskubcyyapmkagxkhhlbdzzntxlnnyr ; OS_CLOUD=vexxhost /usr/bin/python3'
Sep 30 13:38:14 np0005462840.novalocal sudo[4588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:38:14 np0005462840.novalocal python3[4590]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:38:14 np0005462840.novalocal sudo[4588]: pam_unix(sudo:session): session closed for user root
Sep 30 13:38:14 np0005462840.novalocal sudo[4661]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykwkxdkviromovoerdhlbbfjazrtxebh ; OS_CLOUD=vexxhost /usr/bin/python3'
Sep 30 13:38:14 np0005462840.novalocal sudo[4661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:38:15 np0005462840.novalocal python3[4663]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759239494.4218216-373-28938393193512/source _original_basename=tmpd5m7he3i follow=False checksum=482039c4143390a302b2989199040d739f326055 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:38:15 np0005462840.novalocal sudo[4661]: pam_unix(sudo:session): session closed for user root
Sep 30 13:38:19 np0005462840.novalocal sshd-session[4512]: Connection closed by 38.102.83.114 port 53866
Sep 30 13:38:19 np0005462840.novalocal sshd-session[4509]: pam_unix(sshd:session): session closed for user zuul
Sep 30 13:38:19 np0005462840.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Sep 30 13:38:19 np0005462840.novalocal systemd-logind[808]: Session 3 logged out. Waiting for processes to exit.
Sep 30 13:38:19 np0005462840.novalocal systemd-logind[808]: Removed session 3.
Sep 30 13:38:27 np0005462840.novalocal sshd-session[4688]: Received disconnect from 209.38.228.14 port 35778:11: Bye Bye [preauth]
Sep 30 13:38:27 np0005462840.novalocal sshd-session[4688]: Disconnected from authenticating user root 209.38.228.14 port 35778 [preauth]
Sep 30 13:38:34 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 4473
Sep 30 13:39:01 np0005462840.novalocal anacron[1150]: Job `cron.daily' started
Sep 30 13:39:01 np0005462840.novalocal anacron[1150]: Job `cron.daily' terminated
Sep 30 13:39:27 np0005462840.novalocal sshd-session[4692]: Invalid user dev from 121.204.171.142 port 35338
Sep 30 13:39:27 np0005462840.novalocal sshd-session[4692]: Received disconnect from 121.204.171.142 port 35338:11: Bye Bye [preauth]
Sep 30 13:39:27 np0005462840.novalocal sshd-session[4692]: Disconnected from invalid user dev 121.204.171.142 port 35338 [preauth]
Sep 30 13:39:36 np0005462840.novalocal sshd[1005]: drop connection #0 from [59.36.78.66]:53414 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:39:44 np0005462840.novalocal sshd-session[4694]: Received disconnect from 209.38.228.14 port 38404:11: Bye Bye [preauth]
Sep 30 13:39:44 np0005462840.novalocal sshd-session[4694]: Disconnected from authenticating user root 209.38.228.14 port 38404 [preauth]
Sep 30 13:39:51 np0005462840.novalocal sshd-session[4697]: Received disconnect from 210.90.155.80 port 34592:11: Bye Bye [preauth]
Sep 30 13:39:51 np0005462840.novalocal sshd-session[4697]: Disconnected from authenticating user root 210.90.155.80 port 34592 [preauth]
Sep 30 13:40:04 np0005462840.novalocal systemd[1481]: Created slice User Background Tasks Slice.
Sep 30 13:40:04 np0005462840.novalocal systemd[1481]: Starting Cleanup of User's Temporary Files and Directories...
Sep 30 13:40:04 np0005462840.novalocal systemd[1481]: Finished Cleanup of User's Temporary Files and Directories.
Sep 30 13:40:45 np0005462840.novalocal sshd-session[4700]: Received disconnect from 209.38.228.14 port 44340:11: Bye Bye [preauth]
Sep 30 13:40:45 np0005462840.novalocal sshd-session[4700]: Disconnected from authenticating user root 209.38.228.14 port 44340 [preauth]
Sep 30 13:40:53 np0005462840.novalocal sshd-session[4702]: Received disconnect from 193.46.255.159 port 15278:11:  [preauth]
Sep 30 13:40:53 np0005462840.novalocal sshd-session[4702]: Disconnected from authenticating user root 193.46.255.159 port 15278 [preauth]
Sep 30 13:41:04 np0005462840.novalocal sshd-session[4705]: Invalid user tan from 210.90.155.80 port 57924
Sep 30 13:41:04 np0005462840.novalocal sshd-session[4705]: Received disconnect from 210.90.155.80 port 57924:11: Bye Bye [preauth]
Sep 30 13:41:04 np0005462840.novalocal sshd-session[4705]: Disconnected from invalid user tan 210.90.155.80 port 57924 [preauth]
Sep 30 13:41:13 np0005462840.novalocal sshd-session[4704]: Received disconnect from 59.36.78.66 port 44762:11: Bye Bye [preauth]
Sep 30 13:41:13 np0005462840.novalocal sshd-session[4704]: Disconnected from 59.36.78.66 port 44762 [preauth]
Sep 30 13:41:47 np0005462840.novalocal sshd-session[4708]: Invalid user ruslan from 209.38.228.14 port 44946
Sep 30 13:41:47 np0005462840.novalocal sshd-session[4708]: Received disconnect from 209.38.228.14 port 44946:11: Bye Bye [preauth]
Sep 30 13:41:47 np0005462840.novalocal sshd-session[4708]: Disconnected from invalid user ruslan 209.38.228.14 port 44946 [preauth]
Sep 30 13:42:17 np0005462840.novalocal sshd-session[4710]: Invalid user user from 185.156.73.233 port 16416
Sep 30 13:42:18 np0005462840.novalocal sshd-session[4710]: Connection closed by invalid user user 185.156.73.233 port 16416 [preauth]
Sep 30 13:42:26 np0005462840.novalocal sshd-session[4712]: Invalid user san from 210.90.155.80 port 53238
Sep 30 13:42:26 np0005462840.novalocal sshd-session[4712]: Received disconnect from 210.90.155.80 port 53238:11: Bye Bye [preauth]
Sep 30 13:42:26 np0005462840.novalocal sshd-session[4712]: Disconnected from invalid user san 210.90.155.80 port 53238 [preauth]
Sep 30 13:42:47 np0005462840.novalocal sshd-session[4714]: Received disconnect from 209.38.228.14 port 58538:11: Bye Bye [preauth]
Sep 30 13:42:47 np0005462840.novalocal sshd-session[4714]: Disconnected from authenticating user root 209.38.228.14 port 58538 [preauth]
Sep 30 13:43:30 np0005462840.novalocal sshd-session[4717]: Accepted publickey for zuul from 38.102.83.114 port 58456 ssh2: RSA SHA256:PQ5gAlGqGw5eyUoP3tGuJWzdC0qrtAhhgPp/wWGLEq4
Sep 30 13:43:30 np0005462840.novalocal systemd-logind[808]: New session 4 of user zuul.
Sep 30 13:43:30 np0005462840.novalocal systemd[1]: Started Session 4 of User zuul.
Sep 30 13:43:30 np0005462840.novalocal sshd-session[4717]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 13:43:30 np0005462840.novalocal sudo[4744]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njegfirxmkxjvktflbuhalnhwpvocdii ; /usr/bin/python3'
Sep 30 13:43:30 np0005462840.novalocal sudo[4744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:30 np0005462840.novalocal python3[4746]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163e3b-3c83-c2b7-0dc1-000000001cf8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:43:30 np0005462840.novalocal sudo[4744]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:31 np0005462840.novalocal sudo[4775]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzspgrydkhtzebgsmwlmsypygbcmzuxh ; /usr/bin/python3'
Sep 30 13:43:31 np0005462840.novalocal sudo[4775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:31 np0005462840.novalocal python3[4777]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:43:31 np0005462840.novalocal sudo[4775]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:31 np0005462840.novalocal sudo[4801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fairxwqbhzsqntknskkjyzaiwvckgode ; /usr/bin/python3'
Sep 30 13:43:31 np0005462840.novalocal sudo[4801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:31 np0005462840.novalocal python3[4803]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:43:31 np0005462840.novalocal sudo[4801]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:31 np0005462840.novalocal sudo[4827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jenfhivjiclijgvdvgaxtgthbfavbgig ; /usr/bin/python3'
Sep 30 13:43:31 np0005462840.novalocal sudo[4827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:32 np0005462840.novalocal python3[4829]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:43:32 np0005462840.novalocal sudo[4827]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:32 np0005462840.novalocal sudo[4853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqowekqyzwevfdsndyydtatgumghlowi ; /usr/bin/python3'
Sep 30 13:43:32 np0005462840.novalocal sudo[4853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:32 np0005462840.novalocal python3[4855]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:43:32 np0005462840.novalocal sudo[4853]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:32 np0005462840.novalocal sshd-session[4750]: Received disconnect from 210.90.155.80 port 48330:11: Bye Bye [preauth]
Sep 30 13:43:32 np0005462840.novalocal sshd-session[4750]: Disconnected from authenticating user root 210.90.155.80 port 48330 [preauth]
Sep 30 13:43:32 np0005462840.novalocal sudo[4879]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqapgbcdbawzomkgniuvkruwjktsvpeb ; /usr/bin/python3'
Sep 30 13:43:32 np0005462840.novalocal sudo[4879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:32 np0005462840.novalocal python3[4881]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:43:32 np0005462840.novalocal python3[4881]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Sep 30 13:43:32 np0005462840.novalocal sudo[4879]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:33 np0005462840.novalocal sudo[4905]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzpuquxjsvjnjuzxilgicmxnpmjadhdh ; /usr/bin/python3'
Sep 30 13:43:33 np0005462840.novalocal sudo[4905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:33 np0005462840.novalocal python3[4907]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 13:43:33 np0005462840.novalocal systemd[1]: Reloading.
Sep 30 13:43:33 np0005462840.novalocal systemd-rc-local-generator[4928]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 13:43:34 np0005462840.novalocal sudo[4905]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:35 np0005462840.novalocal sudo[4961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akwelhvmsixoijroyritycqhnjxisajs ; /usr/bin/python3'
Sep 30 13:43:35 np0005462840.novalocal sudo[4961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:35 np0005462840.novalocal python3[4963]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Sep 30 13:43:35 np0005462840.novalocal sudo[4961]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:35 np0005462840.novalocal sudo[4987]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpmranmujxbycsplrkxfnbdxiduikhuc ; /usr/bin/python3'
Sep 30 13:43:35 np0005462840.novalocal sudo[4987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:36 np0005462840.novalocal python3[4989]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:43:36 np0005462840.novalocal sudo[4987]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:36 np0005462840.novalocal sudo[5015]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjgtlxuuuiytmrhtknwrllitashyfnrd ; /usr/bin/python3'
Sep 30 13:43:36 np0005462840.novalocal sudo[5015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:36 np0005462840.novalocal python3[5017]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:43:36 np0005462840.novalocal sudo[5015]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:36 np0005462840.novalocal sudo[5043]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kovlbbgwirtzqoomsemqbskrraearloh ; /usr/bin/python3'
Sep 30 13:43:36 np0005462840.novalocal sudo[5043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:36 np0005462840.novalocal python3[5045]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:43:36 np0005462840.novalocal sudo[5043]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:36 np0005462840.novalocal sudo[5071]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfvodhcshxkcoijcoviwhiawqhjuezeq ; /usr/bin/python3'
Sep 30 13:43:36 np0005462840.novalocal sudo[5071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:36 np0005462840.novalocal python3[5073]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:43:36 np0005462840.novalocal sudo[5071]: pam_unix(sudo:session): session closed for user root
Sep 30 13:43:37 np0005462840.novalocal python3[5100]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e3b-3c83-c2b7-0dc1-000000001cfe-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:43:38 np0005462840.novalocal python3[5130]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 13:43:40 np0005462840.novalocal sshd-session[4720]: Connection closed by 38.102.83.114 port 58456
Sep 30 13:43:40 np0005462840.novalocal sshd-session[4717]: pam_unix(sshd:session): session closed for user zuul
Sep 30 13:43:40 np0005462840.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Sep 30 13:43:40 np0005462840.novalocal systemd[1]: session-4.scope: Consumed 3.303s CPU time.
Sep 30 13:43:40 np0005462840.novalocal systemd-logind[808]: Session 4 logged out. Waiting for processes to exit.
Sep 30 13:43:40 np0005462840.novalocal systemd-logind[808]: Removed session 4.
Sep 30 13:43:42 np0005462840.novalocal sshd-session[5135]: Invalid user mohsen from 121.204.171.142 port 43624
Sep 30 13:43:42 np0005462840.novalocal sshd-session[5138]: Accepted publickey for zuul from 38.102.83.114 port 42700 ssh2: RSA SHA256:PQ5gAlGqGw5eyUoP3tGuJWzdC0qrtAhhgPp/wWGLEq4
Sep 30 13:43:42 np0005462840.novalocal systemd-logind[808]: New session 5 of user zuul.
Sep 30 13:43:42 np0005462840.novalocal systemd[1]: Started Session 5 of User zuul.
Sep 30 13:43:42 np0005462840.novalocal sshd-session[5138]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 13:43:42 np0005462840.novalocal sudo[5165]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giilbvrijfpymxxedayhsyyoockwrtnk ; /usr/bin/python3'
Sep 30 13:43:42 np0005462840.novalocal sudo[5165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:43:42 np0005462840.novalocal python3[5167]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Sep 30 13:43:44 np0005462840.novalocal sshd-session[5169]: Received disconnect from 209.38.228.14 port 42208:11: Bye Bye [preauth]
Sep 30 13:43:44 np0005462840.novalocal sshd-session[5169]: Disconnected from authenticating user root 209.38.228.14 port 42208 [preauth]
Sep 30 13:44:07 np0005462840.novalocal kernel: SELinux:  Converting 366 SID table entries...
Sep 30 13:44:07 np0005462840.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 13:44:07 np0005462840.novalocal kernel: SELinux:  policy capability open_perms=1
Sep 30 13:44:07 np0005462840.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 13:44:07 np0005462840.novalocal kernel: SELinux:  policy capability always_check_network=0
Sep 30 13:44:07 np0005462840.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 13:44:07 np0005462840.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 13:44:07 np0005462840.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 13:44:20 np0005462840.novalocal kernel: SELinux:  Converting 366 SID table entries...
Sep 30 13:44:20 np0005462840.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 13:44:20 np0005462840.novalocal kernel: SELinux:  policy capability open_perms=1
Sep 30 13:44:20 np0005462840.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 13:44:20 np0005462840.novalocal kernel: SELinux:  policy capability always_check_network=0
Sep 30 13:44:20 np0005462840.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 13:44:20 np0005462840.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 13:44:20 np0005462840.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 13:44:31 np0005462840.novalocal kernel: SELinux:  Converting 366 SID table entries...
Sep 30 13:44:31 np0005462840.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 13:44:31 np0005462840.novalocal kernel: SELinux:  policy capability open_perms=1
Sep 30 13:44:31 np0005462840.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 13:44:31 np0005462840.novalocal kernel: SELinux:  policy capability always_check_network=0
Sep 30 13:44:31 np0005462840.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 13:44:31 np0005462840.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 13:44:31 np0005462840.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 13:44:34 np0005462840.novalocal setsebool[5233]: The virt_use_nfs policy boolean was changed to 1 by root
Sep 30 13:44:34 np0005462840.novalocal setsebool[5233]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Sep 30 13:44:37 np0005462840.novalocal sshd-session[5241]: Received disconnect from 209.38.228.14 port 51866:11: Bye Bye [preauth]
Sep 30 13:44:37 np0005462840.novalocal sshd-session[5241]: Disconnected from authenticating user root 209.38.228.14 port 51866 [preauth]
Sep 30 13:44:46 np0005462840.novalocal kernel: SELinux:  Converting 369 SID table entries...
Sep 30 13:44:46 np0005462840.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 13:44:46 np0005462840.novalocal kernel: SELinux:  policy capability open_perms=1
Sep 30 13:44:46 np0005462840.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 13:44:46 np0005462840.novalocal kernel: SELinux:  policy capability always_check_network=0
Sep 30 13:44:46 np0005462840.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 13:44:46 np0005462840.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 13:44:46 np0005462840.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 13:44:52 np0005462840.novalocal sshd-session[5260]: Invalid user mikrotik from 210.90.155.80 port 43756
Sep 30 13:44:52 np0005462840.novalocal sshd-session[5260]: Received disconnect from 210.90.155.80 port 43756:11: Bye Bye [preauth]
Sep 30 13:44:52 np0005462840.novalocal sshd-session[5260]: Disconnected from invalid user mikrotik 210.90.155.80 port 43756 [preauth]
Sep 30 13:44:59 np0005462840.novalocal sshd-session[5954]: Invalid user mohammad from 121.204.171.142 port 49758
Sep 30 13:45:05 np0005462840.novalocal sshd-session[5954]: Received disconnect from 121.204.171.142 port 49758:11: Bye Bye [preauth]
Sep 30 13:45:05 np0005462840.novalocal sshd-session[5954]: Disconnected from invalid user mohammad 121.204.171.142 port 49758 [preauth]
Sep 30 13:45:15 np0005462840.novalocal dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Sep 30 13:45:15 np0005462840.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 13:45:15 np0005462840.novalocal systemd[1]: Starting man-db-cache-update.service...
Sep 30 13:45:15 np0005462840.novalocal systemd[1]: Reloading.
Sep 30 13:45:15 np0005462840.novalocal systemd-rc-local-generator[5997]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 13:45:16 np0005462840.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 13:45:21 np0005462840.novalocal systemd[1]: Starting PackageKit Daemon...
Sep 30 13:45:21 np0005462840.novalocal PackageKit[8178]: daemon start
Sep 30 13:45:21 np0005462840.novalocal systemd[1]: Starting Authorization Manager...
Sep 30 13:45:21 np0005462840.novalocal polkitd[8246]: Started polkitd version 0.117
Sep 30 13:45:21 np0005462840.novalocal polkitd[8246]: Loading rules from directory /etc/polkit-1/rules.d
Sep 30 13:45:21 np0005462840.novalocal polkitd[8246]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 30 13:45:21 np0005462840.novalocal polkitd[8246]: Finished loading, compiling and executing 3 rules
Sep 30 13:45:21 np0005462840.novalocal systemd[1]: Started Authorization Manager.
Sep 30 13:45:21 np0005462840.novalocal polkitd[8246]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 30 13:45:21 np0005462840.novalocal systemd[1]: Started PackageKit Daemon.
Sep 30 13:45:24 np0005462840.novalocal sudo[5165]: pam_unix(sudo:session): session closed for user root
Sep 30 13:45:24 np0005462840.novalocal irqbalance[801]: Cannot change IRQ 27 affinity: Operation not permitted
Sep 30 13:45:24 np0005462840.novalocal irqbalance[801]: IRQ 27 affinity is now unmanaged
Sep 30 13:45:26 np0005462840.novalocal python3[10224]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163e3b-3c83-0ae0-34c2-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:45:27 np0005462840.novalocal kernel: evm: overlay not supported
Sep 30 13:45:28 np0005462840.novalocal systemd[1481]: Starting D-Bus User Message Bus...
Sep 30 13:45:28 np0005462840.novalocal dbus-broker-launch[11085]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Sep 30 13:45:28 np0005462840.novalocal dbus-broker-launch[11085]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Sep 30 13:45:28 np0005462840.novalocal systemd[1481]: Started D-Bus User Message Bus.
Sep 30 13:45:28 np0005462840.novalocal dbus-broker-lau[11085]: Ready
Sep 30 13:45:28 np0005462840.novalocal systemd[1481]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Sep 30 13:45:28 np0005462840.novalocal systemd[1481]: Created slice Slice /user.
Sep 30 13:45:28 np0005462840.novalocal systemd[1481]: podman-10978.scope: unit configures an IP firewall, but not running as root.
Sep 30 13:45:28 np0005462840.novalocal systemd[1481]: (This warning is only shown for the first unit using IP firewalling.)
Sep 30 13:45:28 np0005462840.novalocal systemd[1481]: Started podman-10978.scope.
Sep 30 13:45:28 np0005462840.novalocal systemd[1481]: Started podman-pause-740f5974.scope.
Sep 30 13:45:29 np0005462840.novalocal sudo[11386]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqwijxbibdwdtvrfdxsohjmtepytesyg ; /usr/bin/python3'
Sep 30 13:45:29 np0005462840.novalocal sudo[11386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:45:29 np0005462840.novalocal python3[11393]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.195:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.195:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:45:29 np0005462840.novalocal sudo[11386]: pam_unix(sudo:session): session closed for user root
Sep 30 13:45:29 np0005462840.novalocal sshd-session[5141]: Connection closed by 38.102.83.114 port 42700
Sep 30 13:45:29 np0005462840.novalocal sshd-session[5138]: pam_unix(sshd:session): session closed for user zuul
Sep 30 13:45:29 np0005462840.novalocal systemd-logind[808]: Session 5 logged out. Waiting for processes to exit.
Sep 30 13:45:29 np0005462840.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Sep 30 13:45:29 np0005462840.novalocal systemd[1]: session-5.scope: Consumed 1min 827ms CPU time.
Sep 30 13:45:29 np0005462840.novalocal systemd-logind[808]: Removed session 5.
Sep 30 13:45:41 np0005462840.novalocal sshd-session[15760]: Received disconnect from 209.38.228.14 port 58748:11: Bye Bye [preauth]
Sep 30 13:45:41 np0005462840.novalocal sshd-session[15760]: Disconnected from authenticating user root 209.38.228.14 port 58748 [preauth]
Sep 30 13:45:42 np0005462840.novalocal sshd[1005]: Timeout before authentication for connection from 121.204.171.142 to 38.102.83.20, pid = 5135
Sep 30 13:45:47 np0005462840.novalocal sshd-session[17819]: Received disconnect from 91.224.92.79 port 15900:11:  [preauth]
Sep 30 13:45:47 np0005462840.novalocal sshd-session[17819]: Disconnected from authenticating user root 91.224.92.79 port 15900 [preauth]
Sep 30 13:45:47 np0005462840.novalocal sshd-session[18325]: Connection closed by 38.129.56.219 port 39678 [preauth]
Sep 30 13:45:47 np0005462840.novalocal sshd-session[18331]: Connection closed by 38.129.56.219 port 39682 [preauth]
Sep 30 13:45:47 np0005462840.novalocal sshd-session[18328]: Unable to negotiate with 38.129.56.219 port 39686: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Sep 30 13:45:47 np0005462840.novalocal sshd-session[18333]: Unable to negotiate with 38.129.56.219 port 39688: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Sep 30 13:45:47 np0005462840.novalocal sshd-session[18330]: Unable to negotiate with 38.129.56.219 port 39702: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Sep 30 13:45:52 np0005462840.novalocal sshd-session[19681]: Accepted publickey for zuul from 38.102.83.114 port 33138 ssh2: RSA SHA256:PQ5gAlGqGw5eyUoP3tGuJWzdC0qrtAhhgPp/wWGLEq4
Sep 30 13:45:52 np0005462840.novalocal systemd-logind[808]: New session 6 of user zuul.
Sep 30 13:45:52 np0005462840.novalocal systemd[1]: Started Session 6 of User zuul.
Sep 30 13:45:52 np0005462840.novalocal sshd-session[19681]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 13:45:52 np0005462840.novalocal python3[19767]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIhhV8c382h6nVmP++m84nmJ0b1bVFOwRyizHW1PWdeaYPkbGazjanBCvtUqXol5du4IPWxStluJLtsiy++F0wc= zuul@np0005462839.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:45:52 np0005462840.novalocal sudo[19936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyvivtbnhqujkcwiipqkmgddwdmgravv ; /usr/bin/python3'
Sep 30 13:45:52 np0005462840.novalocal sudo[19936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:45:52 np0005462840.novalocal python3[19945]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIhhV8c382h6nVmP++m84nmJ0b1bVFOwRyizHW1PWdeaYPkbGazjanBCvtUqXol5du4IPWxStluJLtsiy++F0wc= zuul@np0005462839.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:45:52 np0005462840.novalocal sudo[19936]: pam_unix(sudo:session): session closed for user root
Sep 30 13:45:53 np0005462840.novalocal sudo[20261]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szekablcjzlaxvriewfhfkgmlbwypvlq ; /usr/bin/python3'
Sep 30 13:45:53 np0005462840.novalocal sudo[20261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:45:53 np0005462840.novalocal python3[20270]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005462840.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Sep 30 13:45:53 np0005462840.novalocal useradd[20360]: new group: name=cloud-admin, GID=1002
Sep 30 13:45:53 np0005462840.novalocal useradd[20360]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Sep 30 13:45:53 np0005462840.novalocal sudo[20261]: pam_unix(sudo:session): session closed for user root
Sep 30 13:45:54 np0005462840.novalocal sudo[20563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uohtemohxcdavjsifujxffedhfpsvnfj ; /usr/bin/python3'
Sep 30 13:45:54 np0005462840.novalocal sudo[20563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:45:54 np0005462840.novalocal python3[20570]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIhhV8c382h6nVmP++m84nmJ0b1bVFOwRyizHW1PWdeaYPkbGazjanBCvtUqXol5du4IPWxStluJLtsiy++F0wc= zuul@np0005462839.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Sep 30 13:45:54 np0005462840.novalocal sudo[20563]: pam_unix(sudo:session): session closed for user root
Sep 30 13:45:54 np0005462840.novalocal sudo[20790]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnmaimkekrksdgpvrrpgihlbezvvdilq ; /usr/bin/python3'
Sep 30 13:45:54 np0005462840.novalocal sudo[20790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:45:54 np0005462840.novalocal python3[20792]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:45:54 np0005462840.novalocal sudo[20790]: pam_unix(sudo:session): session closed for user root
Sep 30 13:45:55 np0005462840.novalocal sudo[21033]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjhgrzvupwaigxahsacxonltumqdrmke ; /usr/bin/python3'
Sep 30 13:45:55 np0005462840.novalocal sudo[21033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:45:55 np0005462840.novalocal python3[21041]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759239954.7136152-167-51391033287229/source _original_basename=tmp8mdc07wp follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:45:55 np0005462840.novalocal sudo[21033]: pam_unix(sudo:session): session closed for user root
Sep 30 13:45:56 np0005462840.novalocal sudo[21344]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpksdljuvmtqltvcdjeceegrdjjvprio ; /usr/bin/python3'
Sep 30 13:45:56 np0005462840.novalocal sudo[21344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:45:56 np0005462840.novalocal python3[21350]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Sep 30 13:45:56 np0005462840.novalocal systemd[1]: Starting Hostname Service...
Sep 30 13:45:56 np0005462840.novalocal systemd[1]: Started Hostname Service.
Sep 30 13:45:56 np0005462840.novalocal systemd-hostnamed[21446]: Changed pretty hostname to 'compute-0'
Sep 30 13:45:56 compute-0 systemd-hostnamed[21446]: Hostname set to <compute-0> (static)
Sep 30 13:45:56 compute-0 NetworkManager[4391]: <info>  [1759239956.5933] hostname: static hostname changed from "np0005462840.novalocal" to "compute-0"
Sep 30 13:45:56 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 13:45:56 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 13:45:56 compute-0 sudo[21344]: pam_unix(sudo:session): session closed for user root
Sep 30 13:45:56 compute-0 sshd-session[19727]: Connection closed by 38.102.83.114 port 33138
Sep 30 13:45:56 compute-0 sshd-session[19681]: pam_unix(sshd:session): session closed for user zuul
Sep 30 13:45:56 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Sep 30 13:45:56 compute-0 systemd[1]: session-6.scope: Consumed 2.235s CPU time.
Sep 30 13:45:56 compute-0 systemd-logind[808]: Session 6 logged out. Waiting for processes to exit.
Sep 30 13:45:56 compute-0 systemd-logind[808]: Removed session 6.
Sep 30 13:45:58 compute-0 sshd-session[21531]: Received disconnect from 210.90.155.80 port 38866:11: Bye Bye [preauth]
Sep 30 13:45:58 compute-0 sshd-session[21531]: Disconnected from authenticating user root 210.90.155.80 port 38866 [preauth]
Sep 30 13:46:06 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 13:46:09 compute-0 sshd-session[25184]: Received disconnect from 87.251.77.103 port 57586:11: Bye Bye [preauth]
Sep 30 13:46:09 compute-0 sshd-session[25184]: Disconnected from authenticating user root 87.251.77.103 port 57586 [preauth]
Sep 30 13:46:15 compute-0 sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 5215
Sep 30 13:46:19 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 13:46:19 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 13:46:19 compute-0 systemd[1]: man-db-cache-update.service: Consumed 53.915s CPU time.
Sep 30 13:46:19 compute-0 systemd[1]: run-r3ed5b4422b5c476e817c8485cc25923b.service: Deactivated successfully.
Sep 30 13:46:26 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 30 13:46:37 compute-0 sshd-session[27036]: Received disconnect from 209.38.228.14 port 33050:11: Bye Bye [preauth]
Sep 30 13:46:37 compute-0 sshd-session[27036]: Disconnected from authenticating user root 209.38.228.14 port 33050 [preauth]
Sep 30 13:47:06 compute-0 sshd-session[27038]: Received disconnect from 210.90.155.80 port 33958:11: Bye Bye [preauth]
Sep 30 13:47:06 compute-0 sshd-session[27038]: Disconnected from authenticating user root 210.90.155.80 port 33958 [preauth]
Sep 30 13:47:23 compute-0 sshd[1005]: drop connection #1 from [59.36.78.66]:38340 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:47:35 compute-0 sshd-session[27043]: Invalid user jerry from 209.38.228.14 port 53690
Sep 30 13:47:35 compute-0 sshd-session[27043]: Received disconnect from 209.38.228.14 port 53690:11: Bye Bye [preauth]
Sep 30 13:47:35 compute-0 sshd-session[27043]: Disconnected from invalid user jerry 209.38.228.14 port 53690 [preauth]
Sep 30 13:47:48 compute-0 sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 18504
Sep 30 13:47:51 compute-0 sshd-session[27045]: Invalid user zx from 87.251.77.103 port 36134
Sep 30 13:47:51 compute-0 sshd-session[27045]: Received disconnect from 87.251.77.103 port 36134:11: Bye Bye [preauth]
Sep 30 13:47:51 compute-0 sshd-session[27045]: Disconnected from invalid user zx 87.251.77.103 port 36134 [preauth]
Sep 30 13:48:16 compute-0 sshd-session[27047]: Invalid user seekcy from 210.90.155.80 port 57362
Sep 30 13:48:16 compute-0 sshd-session[27047]: Received disconnect from 210.90.155.80 port 57362:11: Bye Bye [preauth]
Sep 30 13:48:16 compute-0 sshd-session[27047]: Disconnected from invalid user seekcy 210.90.155.80 port 57362 [preauth]
Sep 30 13:48:37 compute-0 sshd-session[27049]: Invalid user whs from 209.38.228.14 port 53500
Sep 30 13:48:37 compute-0 sshd-session[27049]: Received disconnect from 209.38.228.14 port 53500:11: Bye Bye [preauth]
Sep 30 13:48:37 compute-0 sshd-session[27049]: Disconnected from invalid user whs 209.38.228.14 port 53500 [preauth]
Sep 30 13:48:53 compute-0 sshd[1005]: drop connection #0 from [59.36.78.66]:57910 on [38.102.83.20]:22 penalty: exceeded LoginGraceTime
Sep 30 13:49:23 compute-0 sshd-session[27051]: Invalid user loader from 210.90.155.80 port 52476
Sep 30 13:49:23 compute-0 sshd-session[27051]: Received disconnect from 210.90.155.80 port 52476:11: Bye Bye [preauth]
Sep 30 13:49:23 compute-0 sshd-session[27051]: Disconnected from invalid user loader 210.90.155.80 port 52476 [preauth]
Sep 30 13:49:31 compute-0 sshd-session[27053]: Invalid user superuser from 209.38.228.14 port 46612
Sep 30 13:49:32 compute-0 sshd-session[27053]: Received disconnect from 209.38.228.14 port 46612:11: Bye Bye [preauth]
Sep 30 13:49:32 compute-0 sshd-session[27053]: Disconnected from invalid user superuser 209.38.228.14 port 46612 [preauth]
Sep 30 13:49:32 compute-0 sshd-session[27055]: Received disconnect from 87.251.77.103 port 40914:11: Bye Bye [preauth]
Sep 30 13:49:32 compute-0 sshd-session[27055]: Disconnected from authenticating user root 87.251.77.103 port 40914 [preauth]
Sep 30 13:49:36 compute-0 sshd-session[27057]: Accepted publickey for zuul from 38.129.56.219 port 57410 ssh2: RSA SHA256:PQ5gAlGqGw5eyUoP3tGuJWzdC0qrtAhhgPp/wWGLEq4
Sep 30 13:49:36 compute-0 systemd-logind[808]: New session 7 of user zuul.
Sep 30 13:49:36 compute-0 systemd[1]: Started Session 7 of User zuul.
Sep 30 13:49:36 compute-0 sshd-session[27057]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 13:49:36 compute-0 python3[27133]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 13:49:38 compute-0 sudo[27247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-herasifwzwfayvsjiyhslludtuquteuw ; /usr/bin/python3'
Sep 30 13:49:38 compute-0 sudo[27247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:38 compute-0 python3[27249]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:49:38 compute-0 sudo[27247]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:39 compute-0 sudo[27320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elbadejumumxwiekbwkjaicawnvoptle ; /usr/bin/python3'
Sep 30 13:49:39 compute-0 sudo[27320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:39 compute-0 python3[27322]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759240178.5033855-31299-256420619632633/source mode=0755 _original_basename=delorean.repo follow=False checksum=fdbc451c7e16efca2444f90fdb72f8eb1c12a1b5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:49:39 compute-0 sudo[27320]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:39 compute-0 sudo[27346]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zifvmbfghitnhzatpasmrngeqwwdeyqe ; /usr/bin/python3'
Sep 30 13:49:39 compute-0 sudo[27346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:39 compute-0 python3[27348]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:49:39 compute-0 sudo[27346]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:39 compute-0 sudo[27419]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcjipjoenjoxnklhykznbybnpnvlarmy ; /usr/bin/python3'
Sep 30 13:49:39 compute-0 sudo[27419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:39 compute-0 python3[27421]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759240178.5033855-31299-256420619632633/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:49:39 compute-0 sudo[27419]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:39 compute-0 sudo[27445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbhqpjnsecurbaudzorduulnkmjaswwt ; /usr/bin/python3'
Sep 30 13:49:39 compute-0 sudo[27445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:40 compute-0 python3[27447]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:49:40 compute-0 sudo[27445]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:40 compute-0 sudo[27518]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkeaxkfpprqhzlbukjjrbeeqlgerituh ; /usr/bin/python3'
Sep 30 13:49:40 compute-0 sudo[27518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:40 compute-0 python3[27520]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759240178.5033855-31299-256420619632633/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:49:40 compute-0 sudo[27518]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:40 compute-0 sudo[27544]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odfxeqoesxpcpwfsyzlaobdsmgxrpwur ; /usr/bin/python3'
Sep 30 13:49:40 compute-0 sudo[27544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:40 compute-0 python3[27546]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:49:40 compute-0 sudo[27544]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:40 compute-0 sudo[27617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hocbtcfaoncihmlukfbqphunzkfkolgx ; /usr/bin/python3'
Sep 30 13:49:40 compute-0 sudo[27617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:41 compute-0 python3[27619]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759240178.5033855-31299-256420619632633/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:49:41 compute-0 sudo[27617]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:41 compute-0 sudo[27643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpnkmclnhhqlctnldrfcgftcnjxorhln ; /usr/bin/python3'
Sep 30 13:49:41 compute-0 sudo[27643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:41 compute-0 python3[27645]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:49:41 compute-0 sudo[27643]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:41 compute-0 sudo[27716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkecvvmhfkddrzkvkunlljotdaiogjcj ; /usr/bin/python3'
Sep 30 13:49:41 compute-0 sudo[27716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:41 compute-0 python3[27718]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759240178.5033855-31299-256420619632633/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:49:41 compute-0 sudo[27716]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:41 compute-0 sudo[27742]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtuydbpedidrizaofmejuwlbpfzccqce ; /usr/bin/python3'
Sep 30 13:49:41 compute-0 sudo[27742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:41 compute-0 python3[27744]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:49:41 compute-0 sudo[27742]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:41 compute-0 sudo[27815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsswknuqgrneuvrjwukbmjqpuyhxjpis ; /usr/bin/python3'
Sep 30 13:49:41 compute-0 sudo[27815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:42 compute-0 python3[27817]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759240178.5033855-31299-256420619632633/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:49:42 compute-0 sudo[27815]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:42 compute-0 sudo[27841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pknaxetcaflzpbvpfztlxiohoissofhc ; /usr/bin/python3'
Sep 30 13:49:42 compute-0 sudo[27841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:42 compute-0 python3[27843]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 13:49:42 compute-0 sudo[27841]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:42 compute-0 sudo[27914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttzilameeidvqwxotazoqujnrxzefqmw ; /usr/bin/python3'
Sep 30 13:49:42 compute-0 sudo[27914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 13:49:42 compute-0 python3[27916]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759240178.5033855-31299-256420619632633/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=3193b2329e025492c2ae01f1388d5694c4facea6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 13:49:42 compute-0 sudo[27914]: pam_unix(sudo:session): session closed for user root
Sep 30 13:49:45 compute-0 sshd-session[27942]: Unable to negotiate with 192.168.122.11 port 33256: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Sep 30 13:49:45 compute-0 sshd-session[27945]: Connection closed by 192.168.122.11 port 33226 [preauth]
Sep 30 13:49:45 compute-0 sshd-session[27941]: Connection closed by 192.168.122.11 port 33240 [preauth]
Sep 30 13:49:45 compute-0 sshd-session[27944]: Unable to negotiate with 192.168.122.11 port 33254: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Sep 30 13:49:45 compute-0 sshd-session[27943]: Unable to negotiate with 192.168.122.11 port 33268: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Sep 30 13:49:54 compute-0 python3[27974]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 13:50:25 compute-0 sshd-session[27979]: Invalid user webserver from 209.38.228.14 port 33098
Sep 30 13:50:25 compute-0 sshd-session[27979]: Received disconnect from 209.38.228.14 port 33098:11: Bye Bye [preauth]
Sep 30 13:50:25 compute-0 sshd-session[27979]: Disconnected from invalid user webserver 209.38.228.14 port 33098 [preauth]
Sep 30 13:50:27 compute-0 PackageKit[8178]: daemon quit
Sep 30 13:50:27 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Sep 30 13:50:28 compute-0 sshd-session[27981]: Invalid user free from 210.90.155.80 port 47762
Sep 30 13:50:28 compute-0 sshd-session[27981]: Received disconnect from 210.90.155.80 port 47762:11: Bye Bye [preauth]
Sep 30 13:50:28 compute-0 sshd-session[27981]: Disconnected from invalid user free 210.90.155.80 port 47762 [preauth]
Sep 30 13:51:04 compute-0 sshd-session[27984]: Received disconnect from 91.224.92.79 port 61186:11:  [preauth]
Sep 30 13:51:04 compute-0 sshd-session[27984]: Disconnected from authenticating user root 91.224.92.79 port 61186 [preauth]
Sep 30 13:51:13 compute-0 sshd-session[27986]: Invalid user karthavya from 87.251.77.103 port 40844
Sep 30 13:51:13 compute-0 sshd-session[27986]: Received disconnect from 87.251.77.103 port 40844:11: Bye Bye [preauth]
Sep 30 13:51:13 compute-0 sshd-session[27986]: Disconnected from invalid user karthavya 87.251.77.103 port 40844 [preauth]
Sep 30 13:51:19 compute-0 sshd-session[27988]: Invalid user lgsm from 209.38.228.14 port 53746
Sep 30 13:51:19 compute-0 sshd-session[27988]: Received disconnect from 209.38.228.14 port 53746:11: Bye Bye [preauth]
Sep 30 13:51:19 compute-0 sshd-session[27988]: Disconnected from invalid user lgsm 209.38.228.14 port 53746 [preauth]
Sep 30 13:51:33 compute-0 sshd-session[27990]: Invalid user ai from 210.90.155.80 port 43018
Sep 30 13:51:34 compute-0 sshd-session[27990]: Received disconnect from 210.90.155.80 port 43018:11: Bye Bye [preauth]
Sep 30 13:51:34 compute-0 sshd-session[27990]: Disconnected from invalid user ai 210.90.155.80 port 43018 [preauth]
Sep 30 13:52:16 compute-0 sshd-session[27992]: Invalid user postgres from 209.38.228.14 port 50708
Sep 30 13:52:16 compute-0 sshd-session[27992]: Received disconnect from 209.38.228.14 port 50708:11: Bye Bye [preauth]
Sep 30 13:52:16 compute-0 sshd-session[27992]: Disconnected from invalid user postgres 209.38.228.14 port 50708 [preauth]
Sep 30 13:52:22 compute-0 sshd[1005]: Timeout before authentication for connection from 59.36.78.66 to 38.102.83.20, pid = 27977
Sep 30 13:52:23 compute-0 sshd-session[27994]: Invalid user admin from 80.94.95.115 port 45418
Sep 30 13:52:23 compute-0 sshd-session[27994]: Connection closed by invalid user admin 80.94.95.115 port 45418 [preauth]
Sep 30 13:52:42 compute-0 sshd-session[27998]: Invalid user guru from 210.90.155.80 port 38152
Sep 30 13:52:42 compute-0 sshd-session[27998]: Received disconnect from 210.90.155.80 port 38152:11: Bye Bye [preauth]
Sep 30 13:52:42 compute-0 sshd-session[27998]: Disconnected from invalid user guru 210.90.155.80 port 38152 [preauth]
Sep 30 13:53:19 compute-0 sshd-session[28000]: Invalid user user from 209.38.228.14 port 47986
Sep 30 13:53:19 compute-0 sshd-session[28000]: Received disconnect from 209.38.228.14 port 47986:11: Bye Bye [preauth]
Sep 30 13:53:19 compute-0 sshd-session[28000]: Disconnected from invalid user user 209.38.228.14 port 47986 [preauth]
Sep 30 13:53:52 compute-0 sshd-session[28002]: Invalid user seekcy from 210.90.155.80 port 33338
Sep 30 13:53:52 compute-0 sshd-session[28002]: Received disconnect from 210.90.155.80 port 33338:11: Bye Bye [preauth]
Sep 30 13:53:52 compute-0 sshd-session[28002]: Disconnected from invalid user seekcy 210.90.155.80 port 33338 [preauth]
Sep 30 13:54:19 compute-0 sshd-session[28004]: Invalid user gustavo from 209.38.228.14 port 51368
Sep 30 13:54:20 compute-0 sshd-session[28004]: Received disconnect from 209.38.228.14 port 51368:11: Bye Bye [preauth]
Sep 30 13:54:20 compute-0 sshd-session[28004]: Disconnected from invalid user gustavo 209.38.228.14 port 51368 [preauth]
Sep 30 13:54:37 compute-0 sshd-session[28006]: Invalid user platform from 87.251.77.103 port 54382
Sep 30 13:54:37 compute-0 sshd-session[28006]: Received disconnect from 87.251.77.103 port 54382:11: Bye Bye [preauth]
Sep 30 13:54:37 compute-0 sshd-session[28006]: Disconnected from invalid user platform 87.251.77.103 port 54382 [preauth]
Sep 30 13:54:54 compute-0 sshd-session[27060]: Received disconnect from 38.129.56.219 port 57410:11: disconnected by user
Sep 30 13:54:54 compute-0 sshd-session[27060]: Disconnected from user zuul 38.129.56.219 port 57410
Sep 30 13:54:54 compute-0 sshd-session[27057]: pam_unix(sshd:session): session closed for user zuul
Sep 30 13:54:54 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Sep 30 13:54:54 compute-0 systemd[1]: session-7.scope: Consumed 4.620s CPU time.
Sep 30 13:54:54 compute-0 systemd-logind[808]: Session 7 logged out. Waiting for processes to exit.
Sep 30 13:54:54 compute-0 systemd-logind[808]: Removed session 7.
Sep 30 13:55:05 compute-0 sshd-session[28009]: Invalid user seekcy from 210.90.155.80 port 56696
Sep 30 13:55:06 compute-0 sshd-session[28009]: Received disconnect from 210.90.155.80 port 56696:11: Bye Bye [preauth]
Sep 30 13:55:06 compute-0 sshd-session[28009]: Disconnected from invalid user seekcy 210.90.155.80 port 56696 [preauth]
Sep 30 13:55:22 compute-0 sshd-session[28011]: Invalid user mgeweb from 209.38.228.14 port 57988
Sep 30 13:55:22 compute-0 sshd-session[28011]: Received disconnect from 209.38.228.14 port 57988:11: Bye Bye [preauth]
Sep 30 13:55:22 compute-0 sshd-session[28011]: Disconnected from invalid user mgeweb 209.38.228.14 port 57988 [preauth]
Sep 30 13:56:11 compute-0 sshd-session[28013]: Invalid user Azure from 210.90.155.80 port 51776
Sep 30 13:56:12 compute-0 sshd-session[28013]: Received disconnect from 210.90.155.80 port 51776:11: Bye Bye [preauth]
Sep 30 13:56:12 compute-0 sshd-session[28013]: Disconnected from invalid user Azure 210.90.155.80 port 51776 [preauth]
Sep 30 13:56:19 compute-0 sshd-session[28015]: Received disconnect from 209.38.228.14 port 49056:11: Bye Bye [preauth]
Sep 30 13:56:19 compute-0 sshd-session[28015]: Disconnected from authenticating user root 209.38.228.14 port 49056 [preauth]
Sep 30 13:56:24 compute-0 sshd-session[28017]: Received disconnect from 193.46.255.20 port 57328:11:  [preauth]
Sep 30 13:56:24 compute-0 sshd-session[28017]: Disconnected from authenticating user root 193.46.255.20 port 57328 [preauth]
Sep 30 13:57:12 compute-0 sshd-session[28019]: Invalid user ec2-user from 209.38.228.14 port 55454
Sep 30 13:57:12 compute-0 sshd-session[28019]: Received disconnect from 209.38.228.14 port 55454:11: Bye Bye [preauth]
Sep 30 13:57:12 compute-0 sshd-session[28019]: Disconnected from invalid user ec2-user 209.38.228.14 port 55454 [preauth]
Sep 30 13:57:16 compute-0 sshd-session[28021]: Invalid user admin1 from 210.90.155.80 port 47088
Sep 30 13:57:16 compute-0 sshd-session[28021]: Received disconnect from 210.90.155.80 port 47088:11: Bye Bye [preauth]
Sep 30 13:57:16 compute-0 sshd-session[28021]: Disconnected from invalid user admin1 210.90.155.80 port 47088 [preauth]
Sep 30 13:58:07 compute-0 sshd-session[28023]: Invalid user psg from 209.38.228.14 port 36882
Sep 30 13:58:07 compute-0 sshd-session[28023]: Received disconnect from 209.38.228.14 port 36882:11: Bye Bye [preauth]
Sep 30 13:58:07 compute-0 sshd-session[28023]: Disconnected from invalid user psg 209.38.228.14 port 36882 [preauth]
Sep 30 13:58:21 compute-0 sshd-session[28027]: Invalid user rock from 210.90.155.80 port 42158
Sep 30 13:58:21 compute-0 sshd-session[28027]: Received disconnect from 210.90.155.80 port 42158:11: Bye Bye [preauth]
Sep 30 13:58:21 compute-0 sshd-session[28027]: Disconnected from invalid user rock 210.90.155.80 port 42158 [preauth]
Sep 30 13:59:01 compute-0 anacron[1150]: Job `cron.weekly' started
Sep 30 13:59:01 compute-0 anacron[1150]: Job `cron.weekly' terminated
Sep 30 13:59:02 compute-0 sshd-session[28031]: Invalid user mohamed from 209.38.228.14 port 39190
Sep 30 13:59:02 compute-0 sshd-session[28031]: Received disconnect from 209.38.228.14 port 39190:11: Bye Bye [preauth]
Sep 30 13:59:02 compute-0 sshd-session[28031]: Disconnected from invalid user mohamed 209.38.228.14 port 39190 [preauth]
Sep 30 13:59:30 compute-0 sshd-session[28033]: Invalid user psy from 210.90.155.80 port 37508
Sep 30 13:59:30 compute-0 sshd-session[28033]: Received disconnect from 210.90.155.80 port 37508:11: Bye Bye [preauth]
Sep 30 13:59:30 compute-0 sshd-session[28033]: Disconnected from invalid user psy 210.90.155.80 port 37508 [preauth]
Sep 30 13:59:59 compute-0 sshd-session[28035]: Invalid user test1 from 209.38.228.14 port 44530
Sep 30 13:59:59 compute-0 sshd-session[28035]: Received disconnect from 209.38.228.14 port 44530:11: Bye Bye [preauth]
Sep 30 13:59:59 compute-0 sshd-session[28035]: Disconnected from invalid user test1 209.38.228.14 port 44530 [preauth]
Sep 30 14:00:37 compute-0 sshd-session[28037]: Invalid user seekcy from 210.90.155.80 port 32782
Sep 30 14:00:37 compute-0 sshd-session[28037]: Received disconnect from 210.90.155.80 port 32782:11: Bye Bye [preauth]
Sep 30 14:00:37 compute-0 sshd-session[28037]: Disconnected from invalid user seekcy 210.90.155.80 port 32782 [preauth]
Sep 30 14:01:00 compute-0 sshd-session[28039]: Received disconnect from 209.38.228.14 port 50332:11: Bye Bye [preauth]
Sep 30 14:01:00 compute-0 sshd-session[28039]: Disconnected from authenticating user root 209.38.228.14 port 50332 [preauth]
Sep 30 14:01:01 compute-0 CROND[28042]: (root) CMD (run-parts /etc/cron.hourly)
Sep 30 14:01:01 compute-0 run-parts[28045]: (/etc/cron.hourly) starting 0anacron
Sep 30 14:01:01 compute-0 run-parts[28051]: (/etc/cron.hourly) finished 0anacron
Sep 30 14:01:01 compute-0 CROND[28041]: (root) CMDEND (run-parts /etc/cron.hourly)
Sep 30 14:01:32 compute-0 sshd-session[28052]: Accepted publickey for zuul from 192.168.122.30 port 60376 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:01:32 compute-0 systemd-logind[808]: New session 8 of user zuul.
Sep 30 14:01:32 compute-0 systemd[1]: Started Session 8 of User zuul.
Sep 30 14:01:32 compute-0 sshd-session[28052]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:01:33 compute-0 python3.9[28205]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:01:34 compute-0 sudo[28384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjzubneazoczipcjlbngxbvvngfzcsfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759240894.0166154-56-98254408802305/AnsiballZ_command.py'
Sep 30 14:01:34 compute-0 sudo[28384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:01:34 compute-0 python3.9[28386]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:01:38 compute-0 sshd-session[28400]: Received disconnect from 193.46.255.7 port 11770:11:  [preauth]
Sep 30 14:01:38 compute-0 sshd-session[28400]: Disconnected from authenticating user root 193.46.255.7 port 11770 [preauth]
Sep 30 14:01:42 compute-0 sudo[28384]: pam_unix(sudo:session): session closed for user root
Sep 30 14:01:42 compute-0 sshd-session[28055]: Connection closed by 192.168.122.30 port 60376
Sep 30 14:01:42 compute-0 sshd-session[28052]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:01:42 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Sep 30 14:01:42 compute-0 systemd[1]: session-8.scope: Consumed 8.448s CPU time.
Sep 30 14:01:42 compute-0 systemd-logind[808]: Session 8 logged out. Waiting for processes to exit.
Sep 30 14:01:42 compute-0 systemd-logind[808]: Removed session 8.
Sep 30 14:01:43 compute-0 sshd-session[28418]: Invalid user seekcy from 210.90.155.80 port 56112
Sep 30 14:01:43 compute-0 sshd-session[28418]: Received disconnect from 210.90.155.80 port 56112:11: Bye Bye [preauth]
Sep 30 14:01:43 compute-0 sshd-session[28418]: Disconnected from invalid user seekcy 210.90.155.80 port 56112 [preauth]
Sep 30 14:01:55 compute-0 sshd-session[28447]: Invalid user seekcy from 209.38.228.14 port 45328
Sep 30 14:01:56 compute-0 sshd-session[28447]: Received disconnect from 209.38.228.14 port 45328:11: Bye Bye [preauth]
Sep 30 14:01:56 compute-0 sshd-session[28447]: Disconnected from invalid user seekcy 209.38.228.14 port 45328 [preauth]
Sep 30 14:02:00 compute-0 sshd-session[28449]: Accepted publickey for zuul from 192.168.122.30 port 51942 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:02:00 compute-0 systemd-logind[808]: New session 9 of user zuul.
Sep 30 14:02:00 compute-0 systemd[1]: Started Session 9 of User zuul.
Sep 30 14:02:00 compute-0 sshd-session[28449]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:02:01 compute-0 python3.9[28602]: ansible-ansible.legacy.ping Invoked with data=pong
Sep 30 14:02:02 compute-0 python3.9[28776]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:02:03 compute-0 sudo[28926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anxunljhhnlppidrhcxxdkvrmdishfbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759240922.925636-93-25151550710165/AnsiballZ_command.py'
Sep 30 14:02:03 compute-0 sudo[28926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:02:03 compute-0 python3.9[28928]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:02:03 compute-0 sudo[28926]: pam_unix(sudo:session): session closed for user root
Sep 30 14:02:04 compute-0 sudo[29079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rocnjfngddkypxqbjwjnkynzqdqrcksf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759240923.9112356-129-97104118729800/AnsiballZ_stat.py'
Sep 30 14:02:04 compute-0 sudo[29079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:02:04 compute-0 python3.9[29081]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:02:04 compute-0 sudo[29079]: pam_unix(sudo:session): session closed for user root
Sep 30 14:02:05 compute-0 sudo[29231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygsnhtcuarftzcqktxsmfmhrytjqxlja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759240924.9413173-153-53975923637793/AnsiballZ_file.py'
Sep 30 14:02:05 compute-0 sudo[29231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:02:05 compute-0 python3.9[29233]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:02:05 compute-0 sudo[29231]: pam_unix(sudo:session): session closed for user root
Sep 30 14:02:06 compute-0 sudo[29383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynpmpmpxcbcppgvdcjnpmhwlkeufeqcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759240925.8424282-177-24517023835485/AnsiballZ_stat.py'
Sep 30 14:02:06 compute-0 sudo[29383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:02:06 compute-0 python3.9[29385]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:02:06 compute-0 sudo[29383]: pam_unix(sudo:session): session closed for user root
Sep 30 14:02:06 compute-0 sudo[29506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stxespknfplzfqptrqgyrqkchbqakhmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759240925.8424282-177-24517023835485/AnsiballZ_copy.py'
Sep 30 14:02:06 compute-0 sudo[29506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:02:07 compute-0 python3.9[29508]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759240925.8424282-177-24517023835485/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:02:07 compute-0 sudo[29506]: pam_unix(sudo:session): session closed for user root
Sep 30 14:02:07 compute-0 sudo[29658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aabzyvotpfgkbrfctkdplidrulgrhsay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759240927.1864579-222-149560016380856/AnsiballZ_setup.py'
Sep 30 14:02:07 compute-0 sudo[29658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:02:07 compute-0 python3.9[29660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:02:07 compute-0 sudo[29658]: pam_unix(sudo:session): session closed for user root
Sep 30 14:02:08 compute-0 sudo[29814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eixkqsbvnvltgcasatjvzcakwmmiaaad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759240928.1311483-246-13024399005878/AnsiballZ_file.py'
Sep 30 14:02:08 compute-0 sudo[29814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:02:08 compute-0 python3.9[29816]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:02:08 compute-0 sudo[29814]: pam_unix(sudo:session): session closed for user root
Sep 30 14:02:09 compute-0 python3.9[29966]: ansible-ansible.builtin.service_facts Invoked
Sep 30 14:02:14 compute-0 python3.9[30221]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:02:15 compute-0 python3.9[30371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:02:16 compute-0 python3.9[30525]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:02:17 compute-0 sudo[30681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmecyovljvkxathrpfauuekoctokfrsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759240937.1465304-390-192608885880675/AnsiballZ_setup.py'
Sep 30 14:02:17 compute-0 sudo[30681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:02:17 compute-0 python3.9[30683]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:02:17 compute-0 sudo[30681]: pam_unix(sudo:session): session closed for user root
Sep 30 14:02:18 compute-0 sudo[30765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxolghzbusqfmwaysnhjrscczwhdsjvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759240937.1465304-390-192608885880675/AnsiballZ_dnf.py'
Sep 30 14:02:18 compute-0 sudo[30765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:02:18 compute-0 python3.9[30767]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:02:50 compute-0 sshd-session[30911]: Received disconnect from 210.90.155.80 port 51364:11: Bye Bye [preauth]
Sep 30 14:02:50 compute-0 sshd-session[30911]: Disconnected from authenticating user root 210.90.155.80 port 51364 [preauth]
Sep 30 14:02:52 compute-0 sshd-session[30913]: Invalid user chris from 209.38.228.14 port 43900
Sep 30 14:02:52 compute-0 sshd-session[30913]: Received disconnect from 209.38.228.14 port 43900:11: Bye Bye [preauth]
Sep 30 14:02:52 compute-0 sshd-session[30913]: Disconnected from invalid user chris 209.38.228.14 port 43900 [preauth]
Sep 30 14:03:09 compute-0 systemd[1]: Reloading.
Sep 30 14:03:09 compute-0 systemd-rc-local-generator[30963]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:03:09 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Sep 30 14:03:10 compute-0 systemd[1]: Reloading.
Sep 30 14:03:10 compute-0 systemd-rc-local-generator[31006]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:03:10 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Sep 30 14:03:10 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Sep 30 14:03:10 compute-0 systemd[1]: Reloading.
Sep 30 14:03:11 compute-0 systemd-rc-local-generator[31046]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:03:11 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Sep 30 14:03:12 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Sep 30 14:03:12 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Sep 30 14:03:12 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Sep 30 14:03:45 compute-0 sshd-session[31152]: Invalid user ods from 209.38.228.14 port 58668
Sep 30 14:03:45 compute-0 sshd-session[31152]: Received disconnect from 209.38.228.14 port 58668:11: Bye Bye [preauth]
Sep 30 14:03:45 compute-0 sshd-session[31152]: Disconnected from invalid user ods 209.38.228.14 port 58668 [preauth]
Sep 30 14:03:56 compute-0 sshd-session[31174]: Received disconnect from 210.90.155.80 port 46658:11: Bye Bye [preauth]
Sep 30 14:03:56 compute-0 sshd-session[31174]: Disconnected from authenticating user root 210.90.155.80 port 46658 [preauth]
Sep 30 14:04:44 compute-0 sshd-session[31270]: Invalid user seekcy from 209.38.228.14 port 37176
Sep 30 14:04:44 compute-0 sshd-session[31270]: Received disconnect from 209.38.228.14 port 37176:11: Bye Bye [preauth]
Sep 30 14:04:44 compute-0 sshd-session[31270]: Disconnected from invalid user seekcy 209.38.228.14 port 37176 [preauth]
Sep 30 14:04:49 compute-0 kernel: SELinux:  Converting 2714 SID table entries...
Sep 30 14:04:49 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 14:04:49 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 14:04:49 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 14:04:49 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 14:04:49 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 14:04:49 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 14:04:49 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 14:04:50 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Sep 30 14:04:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 14:04:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 14:04:50 compute-0 systemd[1]: Reloading.
Sep 30 14:04:50 compute-0 systemd-rc-local-generator[31379]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:04:51 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 14:04:52 compute-0 systemd[1]: Starting PackageKit Daemon...
Sep 30 14:04:52 compute-0 PackageKit[31967]: daemon start
Sep 30 14:04:52 compute-0 systemd[1]: Started PackageKit Daemon.
Sep 30 14:04:52 compute-0 sudo[30765]: pam_unix(sudo:session): session closed for user root
Sep 30 14:04:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 14:04:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 14:04:52 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.292s CPU time.
Sep 30 14:04:52 compute-0 systemd[1]: run-r9f7652812c1642dc9b029bcb225191ae.service: Deactivated successfully.
Sep 30 14:04:53 compute-0 sudo[32294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djwrrdwlceonhntdjxmzouhbqiobejne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241092.9039183-426-150872243395166/AnsiballZ_command.py'
Sep 30 14:04:53 compute-0 sudo[32294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:04:53 compute-0 python3.9[32296]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:04:54 compute-0 sudo[32294]: pam_unix(sudo:session): session closed for user root
Sep 30 14:04:55 compute-0 sudo[32575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oncdmzvngixxhrelupzwwovqckpwrgzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241094.6240077-450-263293140733714/AnsiballZ_selinux.py'
Sep 30 14:04:55 compute-0 sudo[32575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:04:55 compute-0 python3.9[32577]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Sep 30 14:04:55 compute-0 sudo[32575]: pam_unix(sudo:session): session closed for user root
Sep 30 14:04:56 compute-0 sudo[32727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvenouowxambazzqekwmvkixpycpabza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241096.2370381-483-85499315886560/AnsiballZ_command.py'
Sep 30 14:04:56 compute-0 sudo[32727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:04:56 compute-0 python3.9[32729]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Sep 30 14:04:57 compute-0 sudo[32727]: pam_unix(sudo:session): session closed for user root
Sep 30 14:04:58 compute-0 sudo[32881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzgbvhypzyixxlezqcpmsqtxounuixwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241097.8710737-507-270011759087347/AnsiballZ_file.py'
Sep 30 14:04:58 compute-0 sudo[32881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:00 compute-0 python3.9[32883]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:05:00 compute-0 sudo[32881]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:01 compute-0 sudo[33033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amvdloufyviunpjusqfrpoaqfmkiicmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241101.4047594-531-105221993465032/AnsiballZ_mount.py'
Sep 30 14:05:01 compute-0 sudo[33033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:02 compute-0 python3.9[33035]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Sep 30 14:05:02 compute-0 sudo[33033]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:03 compute-0 sshd-session[33036]: Invalid user sales1 from 210.90.155.80 port 41728
Sep 30 14:05:03 compute-0 sshd-session[33036]: Received disconnect from 210.90.155.80 port 41728:11: Bye Bye [preauth]
Sep 30 14:05:03 compute-0 sshd-session[33036]: Disconnected from invalid user sales1 210.90.155.80 port 41728 [preauth]
Sep 30 14:05:03 compute-0 sudo[33187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keupfscvlleyhjyrtriitkicafstdigz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241103.0437043-615-221803575398461/AnsiballZ_file.py'
Sep 30 14:05:03 compute-0 sudo[33187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:09 compute-0 python3.9[33189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:05:09 compute-0 sudo[33187]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:11 compute-0 sudo[33339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cauqcjwjwmfulwfpdxocxclhrjlnmhvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241111.5534217-639-187573919818393/AnsiballZ_stat.py'
Sep 30 14:05:11 compute-0 sudo[33339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:12 compute-0 python3.9[33341]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:05:12 compute-0 sudo[33339]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:12 compute-0 sudo[33462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdlujwbohjhodcvmvscizyunpolmbnzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241111.5534217-639-187573919818393/AnsiballZ_copy.py'
Sep 30 14:05:12 compute-0 sudo[33462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:12 compute-0 python3.9[33464]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241111.5534217-639-187573919818393/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b0c7031d22a68ee9798f5449b16cb4c47fcab9d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:05:12 compute-0 sudo[33462]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:13 compute-0 sudo[33614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjxusjhjxehywdhtbnxzoctldwzzocky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241113.573534-720-37462074872940/AnsiballZ_getent.py'
Sep 30 14:05:13 compute-0 sudo[33614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:14 compute-0 python3.9[33616]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Sep 30 14:05:14 compute-0 sudo[33614]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:15 compute-0 sudo[33767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynpzklmoxdzgguecnoeffxkmtcdqxuxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241114.682282-744-160678072832354/AnsiballZ_group.py'
Sep 30 14:05:15 compute-0 sudo[33767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:15 compute-0 python3.9[33769]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 14:05:15 compute-0 groupadd[33770]: group added to /etc/group: name=qemu, GID=107
Sep 30 14:05:15 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:05:15 compute-0 groupadd[33770]: group added to /etc/gshadow: name=qemu
Sep 30 14:05:15 compute-0 groupadd[33770]: new group: name=qemu, GID=107
Sep 30 14:05:15 compute-0 sudo[33767]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:16 compute-0 sudo[33926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjoemwuqgbudllbbmaqjzwpzcafckzsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241115.532775-768-169171901143450/AnsiballZ_user.py'
Sep 30 14:05:16 compute-0 sudo[33926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:16 compute-0 python3.9[33928]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Sep 30 14:05:16 compute-0 useradd[33930]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Sep 30 14:05:16 compute-0 sudo[33926]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:16 compute-0 sudo[34086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amjcdacfssvpnyufsqpkskoucsfmjnoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241116.5328152-792-10714691323938/AnsiballZ_getent.py'
Sep 30 14:05:16 compute-0 sudo[34086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:17 compute-0 python3.9[34088]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Sep 30 14:05:17 compute-0 sudo[34086]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:17 compute-0 sudo[34239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ompgaoogsxzchynbfnwwsafwhkndedmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241117.26142-816-148551740120930/AnsiballZ_group.py'
Sep 30 14:05:17 compute-0 sudo[34239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:17 compute-0 python3.9[34241]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 14:05:17 compute-0 groupadd[34242]: group added to /etc/group: name=hugetlbfs, GID=42477
Sep 30 14:05:17 compute-0 groupadd[34242]: group added to /etc/gshadow: name=hugetlbfs
Sep 30 14:05:17 compute-0 groupadd[34242]: new group: name=hugetlbfs, GID=42477
Sep 30 14:05:17 compute-0 sudo[34239]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:18 compute-0 sudo[34397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pifbqvldrohkbtqkoplyxrszesedvjgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241118.0330405-843-8711289659901/AnsiballZ_file.py'
Sep 30 14:05:18 compute-0 sudo[34397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:18 compute-0 python3.9[34399]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Sep 30 14:05:18 compute-0 sudo[34397]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:19 compute-0 sudo[34549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfjyerosfzehemynheeqnnsrgappatyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241118.886518-876-142843450177887/AnsiballZ_dnf.py'
Sep 30 14:05:19 compute-0 sudo[34549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:19 compute-0 python3.9[34551]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:05:21 compute-0 sudo[34549]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:21 compute-0 sudo[34702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itggtljypjxxnmroxfdayxxlkdhjyzgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241121.3779645-900-92787942881466/AnsiballZ_file.py'
Sep 30 14:05:21 compute-0 sudo[34702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:21 compute-0 python3.9[34704]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:05:21 compute-0 sudo[34702]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:22 compute-0 sudo[34854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvdaojvwzrspjttmzzilmbhnxlypxqxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241122.043576-924-201427701842907/AnsiballZ_stat.py'
Sep 30 14:05:22 compute-0 sudo[34854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:22 compute-0 python3.9[34856]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:05:22 compute-0 sudo[34854]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:22 compute-0 sudo[34977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlawnrpzyxiefgcjvygpiloczuhhjlyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241122.043576-924-201427701842907/AnsiballZ_copy.py'
Sep 30 14:05:22 compute-0 sudo[34977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:23 compute-0 python3.9[34979]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759241122.043576-924-201427701842907/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:05:23 compute-0 sudo[34977]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:23 compute-0 sudo[35129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uospaivviptyvfbvtvqcrhadrscpcvrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241123.2767112-969-103353043844109/AnsiballZ_systemd.py'
Sep 30 14:05:23 compute-0 sudo[35129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:24 compute-0 python3.9[35131]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:05:24 compute-0 systemd[1]: Starting Load Kernel Modules...
Sep 30 14:05:24 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 30 14:05:24 compute-0 kernel: Bridge firewalling registered
Sep 30 14:05:24 compute-0 systemd-modules-load[35135]: Inserted module 'br_netfilter'
Sep 30 14:05:24 compute-0 systemd[1]: Finished Load Kernel Modules.
Sep 30 14:05:24 compute-0 sudo[35129]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:24 compute-0 sudo[35288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxketcyrsgbyqoitfuqloxgbipvhigkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241124.6581662-993-229461665889841/AnsiballZ_stat.py'
Sep 30 14:05:24 compute-0 sudo[35288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:25 compute-0 python3.9[35290]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:05:25 compute-0 sudo[35288]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:25 compute-0 sudo[35411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxdovyoxtjbrxjmqhxuzzdlizstpbykw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241124.6581662-993-229461665889841/AnsiballZ_copy.py'
Sep 30 14:05:25 compute-0 sudo[35411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:25 compute-0 python3.9[35413]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759241124.6581662-993-229461665889841/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:05:25 compute-0 sudo[35411]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:26 compute-0 sudo[35563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-povxbtynvsvgpxqztwwazleuinsqlrsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241126.3965437-1047-228884310839274/AnsiballZ_dnf.py'
Sep 30 14:05:26 compute-0 sudo[35563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:26 compute-0 python3.9[35565]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:05:30 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Sep 30 14:05:30 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Sep 30 14:05:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 14:05:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 14:05:31 compute-0 systemd[1]: Reloading.
Sep 30 14:05:31 compute-0 systemd-rc-local-generator[35627]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:05:31 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 14:05:31 compute-0 sudo[35563]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:32 compute-0 python3.9[37065]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:05:33 compute-0 python3.9[38089]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Sep 30 14:05:34 compute-0 python3.9[38955]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:05:34 compute-0 sudo[39734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvwfwbihkwosltagxjruktjztaanvexn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241134.7394242-1164-2290263651385/AnsiballZ_command.py'
Sep 30 14:05:34 compute-0 sudo[39734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 14:05:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 14:05:35 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.814s CPU time.
Sep 30 14:05:35 compute-0 systemd[1]: run-r217a4da29fb24ddca7f46e1f03195bb3.service: Deactivated successfully.
Sep 30 14:05:35 compute-0 python3.9[39736]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:05:35 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Sep 30 14:05:35 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Sep 30 14:05:35 compute-0 sudo[39734]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:36 compute-0 sudo[40109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoemaodkejdmhjerdhbnaetkdemapntb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241136.2154727-1191-137018063275480/AnsiballZ_systemd.py'
Sep 30 14:05:36 compute-0 sudo[40109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:36 compute-0 python3.9[40111]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:05:36 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Sep 30 14:05:36 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Sep 30 14:05:36 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Sep 30 14:05:36 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Sep 30 14:05:37 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Sep 30 14:05:37 compute-0 sudo[40109]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:37 compute-0 python3.9[40272]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Sep 30 14:05:40 compute-0 sudo[40422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkzuctfhxirvvmgmgjlnyaczzqmjmkte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241140.4079633-1362-85818712005657/AnsiballZ_systemd.py'
Sep 30 14:05:40 compute-0 sudo[40422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:41 compute-0 python3.9[40424]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:05:41 compute-0 systemd[1]: Reloading.
Sep 30 14:05:41 compute-0 systemd-rc-local-generator[40454]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:05:41 compute-0 sudo[40422]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:41 compute-0 sudo[40611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jojsdkgqppjbzxnsvaxghiaabykjjyiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241141.4356132-1362-53568432907229/AnsiballZ_systemd.py'
Sep 30 14:05:41 compute-0 sudo[40611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:42 compute-0 python3.9[40613]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:05:42 compute-0 systemd[1]: Reloading.
Sep 30 14:05:42 compute-0 systemd-rc-local-generator[40644]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:05:42 compute-0 sudo[40611]: pam_unix(sudo:session): session closed for user root
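The two systemd tasks above disable and stop the KSM services (ksm.service, then ksmtuned.service), each followed by a daemon reload. Assuming a manual reproduction, the shell equivalent is approximately:

    systemctl disable --now ksm.service        # enabled=False, state=stopped
    systemctl disable --now ksmtuned.service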
Sep 30 14:05:42 compute-0 sshd-session[40652]: Invalid user seekcy from 209.38.228.14 port 44406
Sep 30 14:05:42 compute-0 sudo[40803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqmgqqhjhrbubxycstcyybugnmjcwunm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241142.6087499-1410-102980277440199/AnsiballZ_command.py'
Sep 30 14:05:42 compute-0 sshd-session[40652]: Received disconnect from 209.38.228.14 port 44406:11: Bye Bye [preauth]
Sep 30 14:05:42 compute-0 sshd-session[40652]: Disconnected from invalid user seekcy 209.38.228.14 port 44406 [preauth]
Sep 30 14:05:42 compute-0 sudo[40803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:43 compute-0 python3.9[40805]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:05:43 compute-0 sudo[40803]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:43 compute-0 sudo[40956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzmdnnznwrwwdupxnsbqjgrrzwsxwbdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241143.2919102-1434-249863872735734/AnsiballZ_command.py'
Sep 30 14:05:43 compute-0 sudo[40956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:43 compute-0 python3.9[40958]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:05:43 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Sep 30 14:05:43 compute-0 sudo[40956]: pam_unix(sudo:session): session closed for user root
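The command tasks above run mkswap and swapon against a pre-created /swap file, and the kernel confirms a 1048572k (about 1 GiB) swap area at priority -2. A hand-run sketch of the same sequence; the fstab line is an assumption for persistence and is not part of the logged run:

    mkswap /swap                               # write the swap signature
    swapon /swap                               # activate it; kernel logs the ~1 GiB area
    # echo '/swap none swap defaults 0 0' >> /etc/fstab   # optional persistence (assumed)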
Sep 30 14:05:44 compute-0 sudo[41109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjbgqarvezumtsirrbvwzqmfdqvilhdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241143.9889953-1458-195683686928794/AnsiballZ_command.py'
Sep 30 14:05:44 compute-0 sudo[41109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:44 compute-0 python3.9[41111]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:05:45 compute-0 sudo[41109]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:46 compute-0 sudo[41271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayycqxeqhmyrueshjzcqcavijhapbewj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241146.0542524-1482-24549049113256/AnsiballZ_command.py'
Sep 30 14:05:46 compute-0 sudo[41271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:46 compute-0 python3.9[41273]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:05:46 compute-0 sudo[41271]: pam_unix(sudo:session): session closed for user root
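The task above writes 2 to /sys/kernel/mm/ksm/run, the kernel knob that stops KSM and unmerges all currently shared pages. It was invoked through the plain command module (_uses_shell=False), where a > redirection is not interpreted by a shell; a shell-level sketch of the intended write would be:

    sh -c 'echo 2 > /sys/kernel/mm/ksm/run'    # stop KSM and unmerge shared pages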
Sep 30 14:05:47 compute-0 sudo[41424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txsxhjrahcryckzthzdedlxbtbekonvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241146.7244465-1506-124665613673933/AnsiballZ_systemd.py'
Sep 30 14:05:47 compute-0 sudo[41424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:47 compute-0 python3.9[41426]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:05:47 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 30 14:05:47 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Sep 30 14:05:47 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Sep 30 14:05:47 compute-0 systemd[1]: Starting Apply Kernel Variables...
Sep 30 14:05:47 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 30 14:05:47 compute-0 systemd[1]: Finished Apply Kernel Variables.
Sep 30 14:05:47 compute-0 sudo[41424]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:47 compute-0 sshd-session[28452]: Connection closed by 192.168.122.30 port 51942
Sep 30 14:05:47 compute-0 sshd-session[28449]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:05:47 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Sep 30 14:05:47 compute-0 systemd[1]: session-9.scope: Consumed 2min 18.053s CPU time.
Sep 30 14:05:47 compute-0 systemd-logind[808]: Session 9 logged out. Waiting for processes to exit.
Sep 30 14:05:47 compute-0 systemd-logind[808]: Removed session 9.
Sep 30 14:05:54 compute-0 sshd-session[41456]: Accepted publickey for zuul from 192.168.122.30 port 49514 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:05:54 compute-0 systemd-logind[808]: New session 10 of user zuul.
Sep 30 14:05:54 compute-0 systemd[1]: Started Session 10 of User zuul.
Sep 30 14:05:54 compute-0 sshd-session[41456]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:05:55 compute-0 python3.9[41609]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:05:56 compute-0 sudo[41763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzlmtjmyvxampnwvpsnwokevkzqgepwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241155.8957403-68-231572670861696/AnsiballZ_getent.py'
Sep 30 14:05:56 compute-0 sudo[41763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:56 compute-0 python3.9[41765]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Sep 30 14:05:56 compute-0 sudo[41763]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:57 compute-0 sudo[41916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsefmjoxwatzudwyptniyeqxslgfmrqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241156.8248615-92-8490900214648/AnsiballZ_group.py'
Sep 30 14:05:57 compute-0 sudo[41916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:57 compute-0 python3.9[41918]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 14:05:57 compute-0 groupadd[41919]: group added to /etc/group: name=openvswitch, GID=42476
Sep 30 14:05:57 compute-0 groupadd[41919]: group added to /etc/gshadow: name=openvswitch
Sep 30 14:05:57 compute-0 groupadd[41919]: new group: name=openvswitch, GID=42476
Sep 30 14:05:57 compute-0 sudo[41916]: pam_unix(sudo:session): session closed for user root
Sep 30 14:05:58 compute-0 sudo[42074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjjnwbyvwwdyohgmysevegzwvmdkzopj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241157.7699378-116-64496611593164/AnsiballZ_user.py'
Sep 30 14:05:58 compute-0 sudo[42074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:58 compute-0 python3.9[42076]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Sep 30 14:05:58 compute-0 useradd[42078]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Sep 30 14:05:58 compute-0 useradd[42078]: add 'openvswitch' to group 'hugetlbfs'
Sep 30 14:05:58 compute-0 useradd[42078]: add 'openvswitch' to shadow group 'hugetlbfs'
Sep 30 14:05:58 compute-0 sudo[42074]: pam_unix(sudo:session): session closed for user root
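The group and user tasks above create the openvswitch account with fixed GID/UID 42476, a /sbin/nologin shell, and supplementary membership in the hugetlbfs group, as the groupadd/useradd lines confirm. Roughly the same result from the shell:

    groupadd -g 42476 openvswitch
    useradd -m -u 42476 -g openvswitch -G hugetlbfs \
        -c 'openvswitch user' -s /sbin/nologin openvswitch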
Sep 30 14:05:59 compute-0 sudo[42234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wchhtwrhounserhrlobnrilkdljpoigo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241159.1475391-146-220101867391104/AnsiballZ_setup.py'
Sep 30 14:05:59 compute-0 sudo[42234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:05:59 compute-0 python3.9[42236]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:05:59 compute-0 sudo[42234]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:00 compute-0 sudo[42318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucaqitsflyrplpgrzqwalnewaxhywzcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241159.1475391-146-220101867391104/AnsiballZ_dnf.py'
Sep 30 14:06:00 compute-0 sudo[42318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:00 compute-0 python3.9[42320]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Sep 30 14:06:03 compute-0 sudo[42318]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:03 compute-0 sudo[42482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dglvihqrlmlqaxdcvioquhkusiuihuoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241163.5127432-188-260762633854324/AnsiballZ_dnf.py'
Sep 30 14:06:03 compute-0 sudo[42482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:04 compute-0 python3.9[42484]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:06:05 compute-0 sshd-session[42486]: Invalid user anonymous from 185.156.73.233 port 20982
Sep 30 14:06:05 compute-0 sshd-session[42486]: Connection closed by invalid user anonymous 185.156.73.233 port 20982 [preauth]
Sep 30 14:06:15 compute-0 sshd-session[42501]: Received disconnect from 210.90.155.80 port 37026:11: Bye Bye [preauth]
Sep 30 14:06:15 compute-0 sshd-session[42501]: Disconnected from authenticating user root 210.90.155.80 port 37026 [preauth]
Sep 30 14:06:18 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Sep 30 14:06:18 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 14:06:18 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 14:06:18 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 14:06:18 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 14:06:18 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 14:06:18 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 14:06:18 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 14:06:18 compute-0 groupadd[42511]: group added to /etc/group: name=unbound, GID=993
Sep 30 14:06:18 compute-0 groupadd[42511]: group added to /etc/gshadow: name=unbound
Sep 30 14:06:18 compute-0 groupadd[42511]: new group: name=unbound, GID=993
Sep 30 14:06:18 compute-0 useradd[42518]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Sep 30 14:06:18 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Sep 30 14:06:18 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Sep 30 14:06:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 14:06:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 14:06:20 compute-0 systemd[1]: Reloading.
Sep 30 14:06:20 compute-0 systemd-rc-local-generator[43017]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:06:20 compute-0 systemd-sysv-generator[43020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:06:20 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 14:06:21 compute-0 sudo[42482]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:21 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 14:06:21 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 14:06:21 compute-0 systemd[1]: run-r5238f660be5a42a8965c7deaf938dadb.service: Deactivated successfully.
Sep 30 14:06:22 compute-0 sudo[43588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgknlshhrbbcxideywdzvoouptahpamo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241181.5144653-212-10300482869374/AnsiballZ_systemd.py'
Sep 30 14:06:22 compute-0 sudo[43588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:22 compute-0 python3.9[43590]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 14:06:22 compute-0 systemd[1]: Reloading.
Sep 30 14:06:22 compute-0 systemd-sysv-generator[43625]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:06:22 compute-0 systemd-rc-local-generator[43621]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:06:22 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Sep 30 14:06:22 compute-0 chown[43632]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Sep 30 14:06:22 compute-0 ovs-ctl[43637]: /etc/openvswitch/conf.db does not exist ... (warning).
Sep 30 14:06:22 compute-0 ovs-ctl[43637]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Sep 30 14:06:22 compute-0 ovs-ctl[43637]: Starting ovsdb-server [  OK  ]
Sep 30 14:06:22 compute-0 ovs-vsctl[43686]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Sep 30 14:06:23 compute-0 ovs-vsctl[43706]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c6331d25-78a2-493c-bb43-51ad387342be\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Sep 30 14:06:23 compute-0 ovs-ctl[43637]: Configuring Open vSwitch system IDs [  OK  ]
Sep 30 14:06:23 compute-0 ovs-vsctl[43712]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Sep 30 14:06:23 compute-0 ovs-ctl[43637]: Enabling remote OVSDB managers [  OK  ]
Sep 30 14:06:23 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Sep 30 14:06:23 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Sep 30 14:06:23 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Sep 30 14:06:23 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Sep 30 14:06:23 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Sep 30 14:06:23 compute-0 ovs-ctl[43757]: Inserting openvswitch module [  OK  ]
Sep 30 14:06:23 compute-0 ovs-ctl[43726]: Starting ovs-vswitchd [  OK  ]
Sep 30 14:06:23 compute-0 ovs-ctl[43726]: Enabling remote OVSDB managers [  OK  ]
Sep 30 14:06:23 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Sep 30 14:06:23 compute-0 ovs-vsctl[43775]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Sep 30 14:06:23 compute-0 systemd[1]: Starting Open vSwitch...
Sep 30 14:06:23 compute-0 systemd[1]: Finished Open vSwitch.
Sep 30 14:06:23 compute-0 sudo[43588]: pam_unix(sudo:session): session closed for user root
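Enabling and starting openvswitch.service above pulls in the OVSDB and forwarding units: ovs-ctl seeds an empty /etc/openvswitch/conf.db, starts ovsdb-server and ovs-vswitchd, loads the openvswitch kernel module, and records system-id and hostname metadata with ovs-vsctl. A hand-run enable plus a couple of verification commands (the checks are assumptions, not part of the logged run):

    systemctl enable --now openvswitch         # enabled=True, state=started
    ovs-vsctl show                             # should report an empty switch at this point
    ovs-vsctl get Open_vSwitch . ovs-version external-ids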
Sep 30 14:06:24 compute-0 python3.9[43926]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:06:25 compute-0 sudo[44076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tugeynnysowvzzvbptkjfdjigrbkugiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241184.5069675-266-31384869465824/AnsiballZ_sefcontext.py'
Sep 30 14:06:25 compute-0 sudo[44076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:25 compute-0 python3.9[44078]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Sep 30 14:06:26 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Sep 30 14:06:26 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 14:06:26 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 14:06:26 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 14:06:26 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 14:06:26 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 14:06:26 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 14:06:26 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 14:06:26 compute-0 sudo[44076]: pam_unix(sudo:session): session closed for user root
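The sefcontext task above adds a persistent SELinux file-context rule mapping /var/lib/edpm-config(/.*)? to container_file_t at level s0, and the kernel lines show the resulting policy reload. The usual CLI equivalent, assuming the policycoreutils management tools are available:

    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    restorecon -Rv /var/lib/edpm-config        # relabel once the directory exists (created a few tasks later)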
Sep 30 14:06:28 compute-0 python3.9[44233]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:06:28 compute-0 sudo[44391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvgltrvcqscstiembpmiuwfyqgwkxges ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241188.61901-320-219442048847707/AnsiballZ_dnf.py'
Sep 30 14:06:28 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Sep 30 14:06:28 compute-0 sudo[44391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:29 compute-0 python3.9[44393]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:06:29 compute-0 sshd-session[44317]: Received disconnect from 91.224.92.32 port 25266:11:  [preauth]
Sep 30 14:06:29 compute-0 sshd-session[44317]: Disconnected from authenticating user root 91.224.92.32 port 25266 [preauth]
Sep 30 14:06:30 compute-0 sudo[44391]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:31 compute-0 sudo[44544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hunxtoxlwqufhvrzoftqxsdlkkpjsphs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241191.1748834-344-133247872285744/AnsiballZ_command.py'
Sep 30 14:06:31 compute-0 sudo[44544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:31 compute-0 python3.9[44546]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:06:32 compute-0 sudo[44544]: pam_unix(sudo:session): session closed for user root
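The dnf task above installs the host tooling named in the module invocation, and the follow-up command runs rpm -V over the same list to verify the installed files. Condensed to a shell sketch:

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos
    # rpm -V over the same package list reproduces the logged verification step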
Sep 30 14:06:33 compute-0 sudo[44831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwiwicasnqrodgfppskcrrfdpsvtdqnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241192.9150543-368-41080473442786/AnsiballZ_file.py'
Sep 30 14:06:33 compute-0 sudo[44831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:33 compute-0 python3.9[44833]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Sep 30 14:06:33 compute-0 sudo[44831]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:34 compute-0 python3.9[44983]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:06:34 compute-0 sudo[45135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hadomizogvjwzgtrrpnwijqwbtzikmgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241194.7180378-416-114176630260501/AnsiballZ_dnf.py'
Sep 30 14:06:34 compute-0 sudo[45135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:35 compute-0 python3.9[45137]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:06:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 14:06:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 14:06:37 compute-0 systemd[1]: Reloading.
Sep 30 14:06:37 compute-0 systemd-rc-local-generator[45173]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:06:37 compute-0 systemd-sysv-generator[45180]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:06:37 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 14:06:38 compute-0 sudo[45135]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 14:06:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 14:06:39 compute-0 systemd[1]: run-r4359e8248a1e4e96a1637df5fca97301.service: Deactivated successfully.
Sep 30 14:06:39 compute-0 sudo[45452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnsaynudlnuxpoermqmhnohlawygmlcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241198.7956991-440-251229404808707/AnsiballZ_systemd.py'
Sep 30 14:06:39 compute-0 sudo[45452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:39 compute-0 python3.9[45454]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:06:39 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Sep 30 14:06:39 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Sep 30 14:06:39 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Sep 30 14:06:39 compute-0 systemd[1]: Stopping Network Manager...
Sep 30 14:06:39 compute-0 NetworkManager[4391]: <info>  [1759241199.4597] caught SIGTERM, shutting down normally.
Sep 30 14:06:39 compute-0 NetworkManager[4391]: <info>  [1759241199.4615] dhcp4 (eth0): canceled DHCP transaction
Sep 30 14:06:39 compute-0 NetworkManager[4391]: <info>  [1759241199.4616] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 14:06:39 compute-0 NetworkManager[4391]: <info>  [1759241199.4616] dhcp4 (eth0): state changed no lease
Sep 30 14:06:39 compute-0 NetworkManager[4391]: <info>  [1759241199.4620] manager: NetworkManager state is now CONNECTED_SITE
Sep 30 14:06:39 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 14:06:39 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 14:06:39 compute-0 NetworkManager[4391]: <info>  [1759241199.6463] exiting (success)
Sep 30 14:06:39 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Sep 30 14:06:39 compute-0 systemd[1]: Stopped Network Manager.
Sep 30 14:06:39 compute-0 systemd[1]: NetworkManager.service: Consumed 11.483s CPU time, 4.2M memory peak, read 0B from disk, written 28.0K to disk.
Sep 30 14:06:39 compute-0 systemd[1]: Starting Network Manager...
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.7060] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:1819ccf5-a897-485a-80b9-c42731ad5ac8)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.7061] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.7120] manager[0x56529d8f0090]: monitoring kernel firmware directory '/lib/firmware'.
Sep 30 14:06:39 compute-0 systemd[1]: Starting Hostname Service...
Sep 30 14:06:39 compute-0 systemd[1]: Started Hostname Service.
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8076] hostname: hostname: using hostnamed
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8077] hostname: static hostname changed from (none) to "compute-0"
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8083] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8088] manager[0x56529d8f0090]: rfkill: Wi-Fi hardware radio set enabled
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8089] manager[0x56529d8f0090]: rfkill: WWAN hardware radio set enabled
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8115] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8125] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8126] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8126] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8126] manager: Networking is enabled by state file
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8129] settings: Loaded settings plugin: keyfile (internal)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8133] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8157] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8168] dhcp: init: Using DHCP client 'internal'
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8170] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8177] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8182] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8189] device (lo): Activation: starting connection 'lo' (5742ac42-8bba-40d6-bdcd-b6cbacaa64c1)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8195] device (eth0): carrier: link connected
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8199] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8202] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8203] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8209] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8217] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8224] device (eth1): carrier: link connected
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8229] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8236] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (70493aff-8f50-53fb-8a3a-7b4dcd69c293) (indicated)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8237] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8243] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8251] device (eth1): Activation: starting connection 'ci-private-network' (70493aff-8f50-53fb-8a3a-7b4dcd69c293)
Sep 30 14:06:39 compute-0 systemd[1]: Started Network Manager.
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8258] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8271] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8275] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8278] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8281] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8286] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8289] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8293] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8299] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8310] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8314] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8342] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8357] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8366] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8369] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8375] device (lo): Activation: successful, device activated.
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8383] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8385] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8388] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8390] manager: NetworkManager state is now CONNECTED_LOCAL
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8394] device (eth1): Activation: successful, device activated.
Sep 30 14:06:39 compute-0 NetworkManager[45472]: <info>  [1759241199.8410] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Sep 30 14:06:39 compute-0 systemd[1]: Starting Network Manager Wait Online...
Sep 30 14:06:39 compute-0 sudo[45452]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:40 compute-0 NetworkManager[45472]: <info>  [1759241200.0933] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Sep 30 14:06:40 compute-0 NetworkManager[45472]: <info>  [1759241200.0982] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Sep 30 14:06:40 compute-0 NetworkManager[45472]: <info>  [1759241200.0985] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Sep 30 14:06:40 compute-0 NetworkManager[45472]: <info>  [1759241200.0990] manager: NetworkManager state is now CONNECTED_SITE
Sep 30 14:06:40 compute-0 NetworkManager[45472]: <info>  [1759241200.1002] device (eth0): Activation: successful, device activated.
Sep 30 14:06:40 compute-0 NetworkManager[45472]: <info>  [1759241200.1011] manager: NetworkManager state is now CONNECTED_GLOBAL
Sep 30 14:06:40 compute-0 NetworkManager[45472]: <info>  [1759241200.1019] manager: startup complete
Sep 30 14:06:40 compute-0 systemd[1]: Finished Network Manager Wait Online.
Sep 30 14:06:40 compute-0 sudo[45679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uakxgpswbnlrflcagpkfawsnwicayafo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241200.113087-464-51528477567862/AnsiballZ_dnf.py'
Sep 30 14:06:40 compute-0 sudo[45679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:40 compute-0 python3.9[45681]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:06:43 compute-0 sshd-session[45688]: Invalid user seekcy from 209.38.228.14 port 36624
Sep 30 14:06:43 compute-0 sshd-session[45688]: Received disconnect from 209.38.228.14 port 36624:11: Bye Bye [preauth]
Sep 30 14:06:43 compute-0 sshd-session[45688]: Disconnected from invalid user seekcy 209.38.228.14 port 36624 [preauth]
Sep 30 14:06:50 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 14:06:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 14:06:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 14:06:50 compute-0 systemd[1]: Reloading.
Sep 30 14:06:50 compute-0 systemd-rc-local-generator[45734]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:06:50 compute-0 systemd-sysv-generator[45739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:06:50 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 14:06:53 compute-0 sudo[45679]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:54 compute-0 sudo[46141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgcinyvtxhxlsmsczvvynhmfobshikiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241214.188538-500-72008875124537/AnsiballZ_stat.py'
Sep 30 14:06:54 compute-0 sudo[46141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:54 compute-0 python3.9[46143]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:06:54 compute-0 sudo[46141]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:55 compute-0 sudo[46293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwhjopghxmdbrsszrhqiblfplqwfjvif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241214.9088907-527-164473264855265/AnsiballZ_ini_file.py'
Sep 30 14:06:55 compute-0 sudo[46293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:55 compute-0 python3.9[46295]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:06:55 compute-0 sudo[46293]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:56 compute-0 sudo[46447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsfvqlarucmklbpnqsvtvvlsasvjzscb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241215.887414-557-18561132820852/AnsiballZ_ini_file.py'
Sep 30 14:06:56 compute-0 sudo[46447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:56 compute-0 python3.9[46449]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:06:56 compute-0 sudo[46447]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:56 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 14:06:56 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 14:06:56 compute-0 systemd[1]: run-r739fb9bc499b45acb30dbfe005b3c2bf.service: Deactivated successfully.
Sep 30 14:06:56 compute-0 sudo[46600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhrzlcwdufrhqikhnznmyygptzijflfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241216.5070174-557-187510269240713/AnsiballZ_ini_file.py'
Sep 30 14:06:56 compute-0 sudo[46600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:56 compute-0 python3.9[46602]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:06:56 compute-0 sudo[46600]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:57 compute-0 sudo[46752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzowttxmnqsjhrqbioltcbbsztlvscsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241217.1253283-602-263558560870958/AnsiballZ_ini_file.py'
Sep 30 14:06:57 compute-0 sudo[46752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:57 compute-0 python3.9[46754]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:06:57 compute-0 sudo[46752]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:58 compute-0 sudo[46904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aadztredubrulwwuejvgcdhvetzeijha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241217.7834954-602-261089655869755/AnsiballZ_ini_file.py'
Sep 30 14:06:58 compute-0 sudo[46904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:58 compute-0 python3.9[46906]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:06:58 compute-0 sudo[46904]: pam_unix(sudo:session): session closed for user root
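The ini_file tasks above ensure no-auto-default=* is present in the [main] section of /etc/NetworkManager/NetworkManager.conf and remove any dns / rc-manager overrides from both NetworkManager.conf and conf.d/99-cloud-init.conf, leaving NetworkManager in charge of DNS and resolv.conf handling. With crudini (installed earlier in this run) the same edits look roughly like:

    crudini --set /etc/NetworkManager/NetworkManager.conf main no-auto-default '*'
    crudini --del /etc/NetworkManager/NetworkManager.conf main dns
    crudini --del /etc/NetworkManager/NetworkManager.conf main rc-manager
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main dns
    crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main rc-manager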
Sep 30 14:06:58 compute-0 sudo[47056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loeolpqrtjpedhfxmsbyyihegfgxjyuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241218.494248-647-185333656572653/AnsiballZ_stat.py'
Sep 30 14:06:58 compute-0 sudo[47056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:58 compute-0 python3.9[47058]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:06:58 compute-0 sudo[47056]: pam_unix(sudo:session): session closed for user root
Sep 30 14:06:59 compute-0 sudo[47179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duqatmwsxmohpwqoyhzepqdcdepbhgyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241218.494248-647-185333656572653/AnsiballZ_copy.py'
Sep 30 14:06:59 compute-0 sudo[47179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:06:59 compute-0 python3.9[47181]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759241218.494248-647-185333656572653/.source _original_basename=.gsbpfh35 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:06:59 compute-0 sudo[47179]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:00 compute-0 sudo[47331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msruvbsiprgcnkidkrxxyiqltqqbxiip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241220.1661692-692-27278446459382/AnsiballZ_file.py'
Sep 30 14:07:00 compute-0 sudo[47331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:00 compute-0 python3.9[47333]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:07:00 compute-0 sudo[47331]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:01 compute-0 sudo[47483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssruzcflfezvhujqpqomqpyvjspitdjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241220.8636982-716-30234373862267/AnsiballZ_edpm_os_net_config_mappings.py'
Sep 30 14:07:01 compute-0 sudo[47483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:01 compute-0 python3.9[47485]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Sep 30 14:07:01 compute-0 sudo[47483]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:01 compute-0 sudo[47635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkntrzhllclqnxmcjecyoedsqlqhrmli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241221.7423172-743-277843542900365/AnsiballZ_file.py'
Sep 30 14:07:01 compute-0 sudo[47635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:02 compute-0 python3.9[47637]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:07:02 compute-0 sudo[47635]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:02 compute-0 sudo[47787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryqudwcylgllbanycgrajyxirwinqmcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241222.6110008-773-219351279885006/AnsiballZ_stat.py'
Sep 30 14:07:02 compute-0 sudo[47787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:03 compute-0 sudo[47787]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:03 compute-0 sudo[47910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzssggyepsbgedsthpfefceohmjxjiec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241222.6110008-773-219351279885006/AnsiballZ_copy.py'
Sep 30 14:07:03 compute-0 sudo[47910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:03 compute-0 sudo[47910]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:04 compute-0 sudo[48062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cymgifvvbjfohkzrjmawjqrkstbozqfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241223.8502893-818-188003558196373/AnsiballZ_slurp.py'
Sep 30 14:07:04 compute-0 sudo[48062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:04 compute-0 python3.9[48064]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Sep 30 14:07:04 compute-0 sudo[48062]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:05 compute-0 sudo[48237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbpigkedlqfatffyunabyztzbeqtqzep ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241224.7508423-845-71561385966441/async_wrapper.py j779402216112 300 /home/zuul/.ansible/tmp/ansible-tmp-1759241224.7508423-845-71561385966441/AnsiballZ_edpm_os_net_config.py _'
Sep 30 14:07:05 compute-0 sudo[48237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:05 compute-0 ansible-async_wrapper.py[48239]: Invoked with j779402216112 300 /home/zuul/.ansible/tmp/ansible-tmp-1759241224.7508423-845-71561385966441/AnsiballZ_edpm_os_net_config.py _
Sep 30 14:07:05 compute-0 ansible-async_wrapper.py[48242]: Starting module and watcher
Sep 30 14:07:05 compute-0 ansible-async_wrapper.py[48242]: Start watching 48243 (300)
Sep 30 14:07:05 compute-0 ansible-async_wrapper.py[48243]: Start module (48243)
Sep 30 14:07:05 compute-0 ansible-async_wrapper.py[48239]: Return async_wrapper task started.
Sep 30 14:07:05 compute-0 sudo[48237]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:05 compute-0 python3.9[48244]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Sep 30 14:07:06 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Sep 30 14:07:06 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Sep 30 14:07:06 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Sep 30 14:07:06 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Sep 30 14:07:06 compute-0 kernel: cfg80211: failed to load regulatory.db
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.6733] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.6746] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7274] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7277] audit: op="connection-add" uuid="e5864adb-8f6b-4994-9c1f-8861253b433b" name="br-ex-br" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7295] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7296] audit: op="connection-add" uuid="cd2e53f0-ae56-4d9c-93fc-25ed48826df7" name="br-ex-port" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7310] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7311] audit: op="connection-add" uuid="6ca92d6f-1dc7-4dcb-849c-50097753eaec" name="eth1-port" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7331] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7332] audit: op="connection-add" uuid="5442bda3-103b-4a8d-ab36-891a9f4f482b" name="vlan20-port" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7346] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7347] audit: op="connection-add" uuid="d076ec70-3c34-474d-bfe6-251b1f77ec94" name="vlan21-port" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7360] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7362] audit: op="connection-add" uuid="3e7c1b5f-01dd-4a13-89fd-8008195ef474" name="vlan22-port" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7373] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7375] audit: op="connection-add" uuid="5bf9d37d-e72e-471c-b09f-e7584980b72c" name="vlan23-port" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7393] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,802-3-ethernet.mtu" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7409] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.7411] audit: op="connection-add" uuid="b1e8d3cf-20a6-4046-abdf-a8e9c028c631" name="br-ex-if" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9083] audit: op="connection-update" uuid="70493aff-8f50-53fb-8a3a-7b4dcd69c293" name="ci-private-network" args="connection.controller,connection.master,connection.slave-type,connection.timestamp,connection.port-type,ipv4.dns,ipv4.never-default,ipv4.method,ipv4.addresses,ipv4.routing-rules,ipv4.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.method,ipv6.addresses,ipv6.routing-rules,ipv6.routes,ovs-interface.type,ovs-external-ids.data" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9101] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9103] audit: op="connection-add" uuid="1de19e51-6c19-4281-b391-3e7d1309af4a" name="vlan20-if" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9120] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9122] audit: op="connection-add" uuid="cc08856b-ce2f-4b8f-af2c-3042289caa8e" name="vlan21-if" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9136] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9137] audit: op="connection-add" uuid="d5d0eb4a-314f-45c4-84ab-be06f23c315c" name="vlan22-if" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9154] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9155] audit: op="connection-add" uuid="e8381561-73f7-4780-be23-f06f4238fadd" name="vlan23-if" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9167] audit: op="connection-delete" uuid="b787acf5-2088-3281-a8cd-a822ba754a2a" name="Wired connection 1" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9178] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9187] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9190] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (e5864adb-8f6b-4994-9c1f-8861253b433b)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9191] audit: op="connection-activate" uuid="e5864adb-8f6b-4994-9c1f-8861253b433b" name="br-ex-br" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9193] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9199] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9202] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (cd2e53f0-ae56-4d9c-93fc-25ed48826df7)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9204] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9209] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9213] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (6ca92d6f-1dc7-4dcb-849c-50097753eaec)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9214] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9220] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9224] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (5442bda3-103b-4a8d-ab36-891a9f4f482b)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9225] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9232] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9235] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (d076ec70-3c34-474d-bfe6-251b1f77ec94)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9237] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9242] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9246] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (3e7c1b5f-01dd-4a13-89fd-8008195ef474)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9247] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9252] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9254] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (5bf9d37d-e72e-471c-b09f-e7584980b72c)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9255] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9257] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9258] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9263] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9266] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9269] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (b1e8d3cf-20a6-4046-abdf-a8e9c028c631)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9269] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9272] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9274] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9274] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9275] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9283] device (eth1): disconnecting for new activation request.
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9283] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9285] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9286] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9287] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9289] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9292] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9295] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (1de19e51-6c19-4281-b391-3e7d1309af4a)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9295] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9297] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9298] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9299] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9301] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9304] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9307] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (cc08856b-ce2f-4b8f-af2c-3042289caa8e)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9307] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9310] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9311] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9312] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9313] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9316] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9318] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (d5d0eb4a-314f-45c4-84ab-be06f23c315c)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9319] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9321] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9322] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9323] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9324] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9328] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9333] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (e8381561-73f7-4780-be23-f06f4238fadd)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9333] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9336] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9337] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9338] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9338] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9347] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=48245 uid=0 result="success"
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9348] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9351] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9352] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9357] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9359] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9361] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9364] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9365] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9369] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9372] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9375] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 kernel: ovs-system: entered promiscuous mode
Sep 30 14:07:07 compute-0 kernel: Timeout policy base is empty
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9384] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9390] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9394] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 systemd-udevd[48251]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9397] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9398] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9402] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9406] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9408] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9410] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9414] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9419] dhcp4 (eth0): canceled DHCP transaction
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9419] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9419] dhcp4 (eth0): state changed no lease
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9422] dhcp4 (eth0): activation: beginning transaction (no timeout)
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9433] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Sep 30 14:07:07 compute-0 NetworkManager[45472]: <info>  [1759241227.9436] audit: op="device-reapply" interface="eth1" ifindex=3 pid=48245 uid=0 result="fail" reason="Device is not activated"
Sep 30 14:07:07 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 14:07:07 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 14:07:07 compute-0 kernel: br-ex: entered promiscuous mode
Sep 30 14:07:07 compute-0 kernel: vlan20: entered promiscuous mode
Sep 30 14:07:07 compute-0 systemd-udevd[48249]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.0276] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.0282] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.0292] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.0300] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.0310] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.0319] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Sep 30 14:07:08 compute-0 kernel: vlan21: entered promiscuous mode
Sep 30 14:07:08 compute-0 kernel: vlan22: entered promiscuous mode
Sep 30 14:07:08 compute-0 kernel: vlan23: entered promiscuous mode
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1743] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1879] device (eth1): Activation: starting connection 'ci-private-network' (70493aff-8f50-53fb-8a3a-7b4dcd69c293)
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1885] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1888] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1890] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1892] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1894] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1896] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1897] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1899] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1907] device (eth1): disconnecting for new activation request.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1908] audit: op="connection-activate" uuid="70493aff-8f50-53fb-8a3a-7b4dcd69c293" name="ci-private-network" pid=48245 uid=0 result="success"
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1912] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1931] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1943] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1951] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1954] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1960] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1972] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1978] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1985] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1991] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.1996] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2002] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2007] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2013] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2019] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2024] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2029] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2033] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2059] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48245 uid=0 result="success"
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2061] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2068] device (eth1): Activation: starting connection 'ci-private-network' (70493aff-8f50-53fb-8a3a-7b4dcd69c293)
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2075] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2103] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2107] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2117] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2125] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2144] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2152] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2157] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2159] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2181] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2194] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2202] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2209] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2217] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2219] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2227] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2234] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2240] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2247] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2254] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2256] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2257] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2267] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2272] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2279] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2287] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2293] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Sep 30 14:07:08 compute-0 NetworkManager[45472]: <info>  [1759241228.2299] device (eth1): Activation: successful, device activated.
Sep 30 14:07:09 compute-0 sudo[48607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbeahujbygtymodtnvfhqspubxqiugbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241228.9034793-845-111318498178565/AnsiballZ_async_status.py'
Sep 30 14:07:09 compute-0 sudo[48607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:09 compute-0 NetworkManager[45472]: <info>  [1759241229.4537] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48245 uid=0 result="success"
Sep 30 14:07:09 compute-0 python3.9[48609]: ansible-ansible.legacy.async_status Invoked with jid=j779402216112.48239 mode=status _async_dir=/root/.ansible_async
Sep 30 14:07:09 compute-0 NetworkManager[45472]: <info>  [1759241229.7075] checkpoint[0x56529d8c6950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Sep 30 14:07:09 compute-0 sudo[48607]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:09 compute-0 NetworkManager[45472]: <info>  [1759241229.7084] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48245 uid=0 result="success"
Sep 30 14:07:09 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 30 14:07:10 compute-0 NetworkManager[45472]: <info>  [1759241230.0792] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48245 uid=0 result="success"
Sep 30 14:07:10 compute-0 NetworkManager[45472]: <info>  [1759241230.0803] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48245 uid=0 result="success"
Sep 30 14:07:10 compute-0 NetworkManager[45472]: <info>  [1759241230.6394] audit: op="networking-control" arg="global-dns-configuration" pid=48245 uid=0 result="success"
Sep 30 14:07:10 compute-0 NetworkManager[45472]: <info>  [1759241230.7223] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Sep 30 14:07:10 compute-0 ansible-async_wrapper.py[48242]: 48243 still running (300)
Sep 30 14:07:10 compute-0 NetworkManager[45472]: <info>  [1759241230.8187] audit: op="networking-control" arg="global-dns-configuration" pid=48245 uid=0 result="success"
Sep 30 14:07:10 compute-0 NetworkManager[45472]: <info>  [1759241230.8213] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48245 uid=0 result="success"
Sep 30 14:07:10 compute-0 NetworkManager[45472]: <info>  [1759241230.9420] checkpoint[0x56529d8c6a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Sep 30 14:07:10 compute-0 NetworkManager[45472]: <info>  [1759241230.9423] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48245 uid=0 result="success"
Sep 30 14:07:10 compute-0 ansible-async_wrapper.py[48243]: Module complete (48243)
Sep 30 14:07:12 compute-0 sudo[48716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdgnebpeesscwxwwygepoapnzvbnuzkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241228.9034793-845-111318498178565/AnsiballZ_async_status.py'
Sep 30 14:07:12 compute-0 sudo[48716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:13 compute-0 python3.9[48718]: ansible-ansible.legacy.async_status Invoked with jid=j779402216112.48239 mode=status _async_dir=/root/.ansible_async
Sep 30 14:07:13 compute-0 sudo[48716]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:13 compute-0 sudo[48816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twrhkwyralkprshepfqqqxhwtshawbwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241228.9034793-845-111318498178565/AnsiballZ_async_status.py'
Sep 30 14:07:13 compute-0 sudo[48816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:13 compute-0 python3.9[48818]: ansible-ansible.legacy.async_status Invoked with jid=j779402216112.48239 mode=cleanup _async_dir=/root/.ansible_async
Sep 30 14:07:13 compute-0 sudo[48816]: pam_unix(sudo:session): session closed for user root
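The async_wrapper entries at 14:07:05, the "Module complete" line at 14:07:10, the async_status probes at 14:07:09 and 14:07:13 and the mode=cleanup call just above are Ansible's fire-and-forget pattern end to end: the long-running os-net-config task is started in the background, polled until finished, then its job file is removed. A minimal playbook sketch of that pattern (task names and the command are illustrative, the 300 matches the watcher timeout logged by async_wrapper) looks like this:

    - name: Apply network configuration in the background   # illustrative task
      ansible.builtin.command: /usr/local/bin/apply-net-config   # hypothetical command
      async: 300
      poll: 0
      register: net_job

    - name: Poll until the background job finishes
      ansible.builtin.async_status:
        jid: "{{ net_job.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: 30
      delay: 4

    - name: Remove the async job file
      ansible.builtin.async_status:
        jid: "{{ net_job.ansible_job_id }}"
        mode: cleanup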
Sep 30 14:07:14 compute-0 sudo[48968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbcpvhnzwjnseaeagigtnqewwrbrqbwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241233.955207-926-177176307320833/AnsiballZ_stat.py'
Sep 30 14:07:14 compute-0 sudo[48968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:14 compute-0 python3.9[48970]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:07:14 compute-0 sudo[48968]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:14 compute-0 sudo[49091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njsdbkllcvuejhaznadpaqtgundltadd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241233.955207-926-177176307320833/AnsiballZ_copy.py'
Sep 30 14:07:14 compute-0 sudo[49091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:14 compute-0 python3.9[49093]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759241233.955207-926-177176307320833/.source.returncode _original_basename=.3ve4206c follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:07:15 compute-0 sudo[49091]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:15 compute-0 sudo[49243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjoqbtvhaamwxlnbjdlfdcaswswiqhtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241235.279819-974-280727915575810/AnsiballZ_stat.py'
Sep 30 14:07:15 compute-0 sudo[49243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:15 compute-0 python3.9[49245]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:07:15 compute-0 sudo[49243]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:15 compute-0 ansible-async_wrapper.py[48242]: Done in kid B.
Sep 30 14:07:16 compute-0 sudo[49366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykshkdncnrkpvfpcmvduphyfuehbfefp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241235.279819-974-280727915575810/AnsiballZ_copy.py'
Sep 30 14:07:16 compute-0 sudo[49366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:16 compute-0 python3.9[49368]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759241235.279819-974-280727915575810/.source.cfg _original_basename=.8valigbg follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:07:16 compute-0 sudo[49366]: pam_unix(sudo:session): session closed for user root
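The file dropped into /etc/cloud/cloud.cfg.d/ above is, judging by its name, the standard cloud-init switch that stops cloud-init from rewriting the network configuration that os-net-config now owns. Its contents are not logged; the conventional form of that snippet is:

    # /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg (assumed contents)
    network:
      config: disabled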
Sep 30 14:07:16 compute-0 sudo[49519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zybmldcitjrqsirgmqmnnkkvmeeevsrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241236.43414-1019-251383975883020/AnsiballZ_systemd.py'
Sep 30 14:07:16 compute-0 sudo[49519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:17 compute-0 python3.9[49521]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:07:17 compute-0 systemd[1]: Reloading Network Manager...
Sep 30 14:07:17 compute-0 NetworkManager[45472]: <info>  [1759241237.1132] audit: op="reload" arg="0" pid=49525 uid=0 result="success"
Sep 30 14:07:17 compute-0 NetworkManager[45472]: <info>  [1759241237.1145] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Sep 30 14:07:17 compute-0 systemd[1]: Reloaded Network Manager.
Sep 30 14:07:17 compute-0 sudo[49519]: pam_unix(sudo:session): session closed for user root
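The systemd module invocation above performs a plain reload, not a restart, of NetworkManager: the daemon re-reads NetworkManager.conf and the conf.d snippets (the "config: signal: SIGHUP" line above) without tearing down the connections that were just activated. Reconstructed as a playbook task it is simply:

    - name: Reload NetworkManager so new conf.d snippets take effect
      ansible.builtin.systemd:
        name: NetworkManager
        state: reloaded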
Sep 30 14:07:17 compute-0 sshd-session[41459]: Connection closed by 192.168.122.30 port 49514
Sep 30 14:07:17 compute-0 sshd-session[41456]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:07:17 compute-0 systemd-logind[808]: Session 10 logged out. Waiting for processes to exit.
Sep 30 14:07:17 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Sep 30 14:07:17 compute-0 systemd[1]: session-10.scope: Consumed 50.703s CPU time.
Sep 30 14:07:17 compute-0 systemd-logind[808]: Removed session 10.
Sep 30 14:07:23 compute-0 sshd-session[49556]: Accepted publickey for zuul from 192.168.122.30 port 54446 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:07:23 compute-0 systemd-logind[808]: New session 11 of user zuul.
Sep 30 14:07:23 compute-0 systemd[1]: Started Session 11 of User zuul.
Sep 30 14:07:23 compute-0 sshd-session[49556]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:07:24 compute-0 python3.9[49709]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:07:25 compute-0 python3.9[49863]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:07:26 compute-0 python3.9[50057]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:07:26 compute-0 sshd-session[49559]: Connection closed by 192.168.122.30 port 54446
Sep 30 14:07:26 compute-0 sshd-session[49556]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:07:26 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Sep 30 14:07:26 compute-0 systemd[1]: session-11.scope: Consumed 2.319s CPU time.
Sep 30 14:07:26 compute-0 systemd-logind[808]: Session 11 logged out. Waiting for processes to exit.
Sep 30 14:07:26 compute-0 systemd-logind[808]: Removed session 11.
Sep 30 14:07:27 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 14:07:32 compute-0 sshd-session[50086]: Received disconnect from 210.90.155.80 port 60440:11: Bye Bye [preauth]
Sep 30 14:07:32 compute-0 sshd-session[50086]: Disconnected from authenticating user root 210.90.155.80 port 60440 [preauth]
Sep 30 14:07:33 compute-0 sshd-session[50089]: Accepted publickey for zuul from 192.168.122.30 port 36958 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:07:33 compute-0 systemd-logind[808]: New session 12 of user zuul.
Sep 30 14:07:33 compute-0 systemd[1]: Started Session 12 of User zuul.
Sep 30 14:07:33 compute-0 sshd-session[50089]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:07:34 compute-0 python3.9[50242]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:07:35 compute-0 python3.9[50396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:07:36 compute-0 sudo[50551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arislnbgoogrlohsywkhjkiaiuhhueld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241255.7737865-80-238757596484933/AnsiballZ_setup.py'
Sep 30 14:07:36 compute-0 sudo[50551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:36 compute-0 python3.9[50553]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:07:36 compute-0 sudo[50551]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:37 compute-0 sudo[50635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzwfzypwqpndoguwnjeyvnjlrrqktmvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241255.7737865-80-238757596484933/AnsiballZ_dnf.py'
Sep 30 14:07:37 compute-0 sudo[50635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:37 compute-0 python3.9[50637]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:07:38 compute-0 sudo[50635]: pam_unix(sudo:session): session closed for user root
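The dnf invocation above is where the container runtime itself is installed; everything that follows (the networks directory, podman.json, registries.conf.d, containers.conf) configures that runtime. Stripped of the module defaults echoed into the log, the task is just:

    - name: Install podman
      ansible.builtin.dnf:
        name: podman
        state: present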
Sep 30 14:07:39 compute-0 sudo[50789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiwspqcfnwbkdmxhkkdqlaohwcyxvfjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241259.0076725-116-107204529125703/AnsiballZ_setup.py'
Sep 30 14:07:39 compute-0 sudo[50789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:39 compute-0 python3.9[50791]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:07:39 compute-0 sudo[50789]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:40 compute-0 sudo[50984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcnrirmtxzmascraphffdzdppvneafxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241260.1969612-149-202679495712629/AnsiballZ_file.py'
Sep 30 14:07:40 compute-0 sudo[50984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:40 compute-0 python3.9[50986]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:07:40 compute-0 sudo[50984]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:41 compute-0 sudo[51136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etobtgmxwluwkvfmwbtphjuzvuvlueet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241261.024996-173-81839389204631/AnsiballZ_command.py'
Sep 30 14:07:41 compute-0 sudo[51136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:41 compute-0 python3.9[51138]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:07:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat735430522-merged.mount: Deactivated successfully.
Sep 30 14:07:41 compute-0 podman[51139]: 2025-09-30 14:07:41.824187727 +0000 UTC m=+0.117124616 system refresh
Sep 30 14:07:41 compute-0 sudo[51136]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:42 compute-0 sudo[51297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsrppkuhpinljisbdzaxxbwwipnqqxtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241261.9972372-197-139979532905413/AnsiballZ_stat.py'
Sep 30 14:07:42 compute-0 sudo[51297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:42 compute-0 python3.9[51299]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:07:42 compute-0 sudo[51297]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:07:43 compute-0 sudo[51420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijokfptvznfshpwzslkfsjyothrppcjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241261.9972372-197-139979532905413/AnsiballZ_copy.py'
Sep 30 14:07:43 compute-0 sudo[51420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:43 compute-0 python3.9[51422]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241261.9972372-197-139979532905413/.source.json follow=False _original_basename=podman_network_config.j2 checksum=9d7eff0d09564358ae4edd27f8e3039a943bb8d7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:07:43 compute-0 sudo[51420]: pam_unix(sudo:session): session closed for user root
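The copy above replaces /etc/containers/networks/podman.json, the default "podman" network definition read by netavark, with a file rendered from podman_network_config.j2 (the _original_basename in the logged parameters), after first inspecting the existing network at 14:07:41. The template itself is not in the log; reconstructed from the logged parameters the task shape is roughly:

    - name: Install the default podman network definition
      ansible.builtin.template:
        src: podman_network_config.j2   # template name taken from the logged _original_basename
        dest: /etc/containers/networks/podman.json
        owner: root
        group: root
        mode: "0644"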
Sep 30 14:07:43 compute-0 sudo[51572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrygsaziyhubfogtzhdvblcfpfhrwnud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241263.5136688-242-144210214493825/AnsiballZ_stat.py'
Sep 30 14:07:43 compute-0 sudo[51572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:43 compute-0 python3.9[51574]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:07:44 compute-0 sudo[51572]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:44 compute-0 sudo[51697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gptcbkwtsmynnyccvaplznrgberqvwmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241263.5136688-242-144210214493825/AnsiballZ_copy.py'
Sep 30 14:07:44 compute-0 sudo[51697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:44 compute-0 python3.9[51699]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759241263.5136688-242-144210214493825/.source.conf follow=False _original_basename=registries.conf.j2 checksum=3d72769785e04dd3ae90416f7325c617e0f9262b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:07:44 compute-0 sudo[51697]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:44 compute-0 sshd-session[51653]: Received disconnect from 209.38.228.14 port 43204:11: Bye Bye [preauth]
Sep 30 14:07:44 compute-0 sshd-session[51653]: Disconnected from authenticating user root 209.38.228.14 port 43204 [preauth]
Sep 30 14:07:45 compute-0 sudo[51849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktdjtttatgasltigygcccpgomcjqyybh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241264.7382376-290-166625224805217/AnsiballZ_ini_file.py'
Sep 30 14:07:45 compute-0 sudo[51849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:45 compute-0 python3.9[51851]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:07:45 compute-0 sudo[51849]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:45 compute-0 sudo[52001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddcarwceptjrvddgxflgbtyshsnwzcws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241265.518454-290-61242632648547/AnsiballZ_ini_file.py'
Sep 30 14:07:45 compute-0 sudo[52001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:45 compute-0 python3.9[52003]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:07:45 compute-0 sudo[52001]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:46 compute-0 sudo[52153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzkxvaksifearlfxzygausemxtpxfczf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241266.0859919-290-9014312476090/AnsiballZ_ini_file.py'
Sep 30 14:07:46 compute-0 sudo[52153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:46 compute-0 python3.9[52155]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:07:46 compute-0 sudo[52153]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:46 compute-0 sudo[52305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deqwjciwdhpzbznxhyyiullsyfvkbqkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241266.6518173-290-137225548660240/AnsiballZ_ini_file.py'
Sep 30 14:07:46 compute-0 sudo[52305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:47 compute-0 python3.9[52307]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:07:47 compute-0 sudo[52305]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:47 compute-0 sudo[52457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jncnugklfmevqhvulakbevzyhlmknehh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241267.515947-383-119110747312747/AnsiballZ_dnf.py'
Sep 30 14:07:47 compute-0 sudo[52457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:48 compute-0 python3.9[52459]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:07:49 compute-0 sudo[52457]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:50 compute-0 sudo[52610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctmonvyvzqdprzgsowfmughdstghbjdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241270.0334988-416-267350483197875/AnsiballZ_setup.py'
Sep 30 14:07:50 compute-0 sudo[52610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:50 compute-0 python3.9[52612]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:07:50 compute-0 sudo[52610]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:51 compute-0 sudo[52764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rucqpdpxrcceudjueufqnnodkwekwoyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241270.7952156-440-112969653314161/AnsiballZ_stat.py'
Sep 30 14:07:51 compute-0 sudo[52764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:51 compute-0 python3.9[52766]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:07:51 compute-0 sudo[52764]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:51 compute-0 sudo[52916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgneqfyasvwzwyrwxvytjbannspycnol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241271.506096-467-44650184954051/AnsiballZ_stat.py'
Sep 30 14:07:51 compute-0 sudo[52916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:51 compute-0 python3.9[52918]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:07:51 compute-0 sudo[52916]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:52 compute-0 sudo[53068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeenqarwprendqmefzcejekgckihnywu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241272.3847647-497-162297703472145/AnsiballZ_service_facts.py'
Sep 30 14:07:52 compute-0 sudo[53068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:53 compute-0 python3.9[53070]: ansible-service_facts Invoked
Sep 30 14:07:53 compute-0 network[53087]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 14:07:53 compute-0 network[53088]: 'network-scripts' will be removed from distribution in near future.
Sep 30 14:07:53 compute-0 network[53089]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 14:07:55 compute-0 sudo[53068]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:56 compute-0 sudo[53374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abugonujerjsyvjggjsbhgflqnzxjmpt ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759241276.493306-536-156407028349642/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759241276.493306-536-156407028349642/args'
Sep 30 14:07:56 compute-0 sudo[53374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:56 compute-0 sudo[53374]: pam_unix(sudo:session): session closed for user root
Sep 30 14:07:57 compute-0 sudo[53541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqbadulohfawugcqrjanwommoeniqvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241277.2240458-569-193126434602169/AnsiballZ_dnf.py'
Sep 30 14:07:57 compute-0 sudo[53541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:07:57 compute-0 python3.9[53543]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:07:59 compute-0 sudo[53541]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:00 compute-0 sudo[53694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkwfpzcwgrthfmkimgvaysaqyptvyhmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241279.5885472-608-232613393588180/AnsiballZ_package_facts.py'
Sep 30 14:08:00 compute-0 sudo[53694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:00 compute-0 python3.9[53696]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Sep 30 14:08:00 compute-0 sudo[53694]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:01 compute-0 sudo[53846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etgeuojknvuycilaiqzyokodoqbetzgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241281.5464735-638-172137066748327/AnsiballZ_stat.py'
Sep 30 14:08:01 compute-0 sudo[53846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:02 compute-0 python3.9[53848]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:02 compute-0 sudo[53846]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:02 compute-0 sudo[53971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjvmmpzwkisjgijqwzaxuloymmexywav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241281.5464735-638-172137066748327/AnsiballZ_copy.py'
Sep 30 14:08:02 compute-0 sudo[53971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:02 compute-0 python3.9[53973]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759241281.5464735-638-172137066748327/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:02 compute-0 sudo[53971]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:03 compute-0 sudo[54125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvtyhevzgeefzxdsppqytbataehxkqbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241282.9161825-683-33684685959128/AnsiballZ_stat.py'
Sep 30 14:08:03 compute-0 sudo[54125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:03 compute-0 python3.9[54127]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:03 compute-0 sudo[54125]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:03 compute-0 sudo[54250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcyjgyxojycabszqdcagbrmbpjuokprg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241282.9161825-683-33684685959128/AnsiballZ_copy.py'
Sep 30 14:08:03 compute-0 sudo[54250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:03 compute-0 python3.9[54252]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759241282.9161825-683-33684685959128/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:03 compute-0 sudo[54250]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:05 compute-0 sudo[54404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxyphfuwpwrzskfljcyyphufzlgfpfjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241285.0299423-746-227559846850219/AnsiballZ_lineinfile.py'
Sep 30 14:08:05 compute-0 sudo[54404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:05 compute-0 python3.9[54406]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:05 compute-0 sudo[54404]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:07 compute-0 sudo[54558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxjdzzuesazkipdmsgvjmkcuqdukspyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241286.758255-791-190243779744526/AnsiballZ_setup.py'
Sep 30 14:08:07 compute-0 sudo[54558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:07 compute-0 python3.9[54560]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:08:07 compute-0 sudo[54558]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:08 compute-0 sudo[54642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpfcflttiathicrrvvycxiernxrrvght ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241286.758255-791-190243779744526/AnsiballZ_systemd.py'
Sep 30 14:08:08 compute-0 sudo[54642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:08 compute-0 python3.9[54644]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:08:08 compute-0 sudo[54642]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:09 compute-0 sudo[54796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcvhoiyrgvizeucflgulxrhjlhrkaagk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241289.3798306-839-78570710012902/AnsiballZ_setup.py'
Sep 30 14:08:09 compute-0 sudo[54796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:09 compute-0 python3.9[54798]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:08:10 compute-0 sudo[54796]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:10 compute-0 sudo[54880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hitodpptubdxirtygudwluwchivhinpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241289.3798306-839-78570710012902/AnsiballZ_systemd.py'
Sep 30 14:08:10 compute-0 sudo[54880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:10 compute-0 python3.9[54882]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:08:10 compute-0 chronyd[790]: chronyd exiting
Sep 30 14:08:10 compute-0 systemd[1]: Stopping NTP client/server...
Sep 30 14:08:10 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Sep 30 14:08:10 compute-0 systemd[1]: Stopped NTP client/server.
Sep 30 14:08:10 compute-0 systemd[1]: Starting NTP client/server...
Sep 30 14:08:10 compute-0 chronyd[54890]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Sep 30 14:08:10 compute-0 chronyd[54890]: Frequency -26.013 +/- 0.267 ppm read from /var/lib/chrony/drift
Sep 30 14:08:10 compute-0 chronyd[54890]: Loaded seccomp filter (level 2)
Sep 30 14:08:10 compute-0 systemd[1]: Started NTP client/server.
Sep 30 14:08:10 compute-0 sudo[54880]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:11 compute-0 sshd-session[50092]: Connection closed by 192.168.122.30 port 36958
Sep 30 14:08:11 compute-0 sshd-session[50089]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:08:11 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Sep 30 14:08:11 compute-0 systemd[1]: session-12.scope: Consumed 24.604s CPU time.
Sep 30 14:08:11 compute-0 systemd-logind[808]: Session 12 logged out. Waiting for processes to exit.
Sep 30 14:08:11 compute-0 systemd-logind[808]: Removed session 12.
Sep 30 14:08:17 compute-0 sshd-session[54916]: Accepted publickey for zuul from 192.168.122.30 port 54338 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:08:17 compute-0 systemd-logind[808]: New session 13 of user zuul.
Sep 30 14:08:17 compute-0 systemd[1]: Started Session 13 of User zuul.
Sep 30 14:08:17 compute-0 sshd-session[54916]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:08:18 compute-0 sudo[55069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fznfexwlriekdqrgjzjeazrnxtmjbihs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241297.603999-26-120976157031380/AnsiballZ_file.py'
Sep 30 14:08:18 compute-0 sudo[55069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:18 compute-0 python3.9[55071]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:18 compute-0 sudo[55069]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:18 compute-0 sudo[55221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnqgqxmmapiucwppcearmjmtbadznake ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241298.4904988-62-253307241126230/AnsiballZ_stat.py'
Sep 30 14:08:18 compute-0 sudo[55221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:19 compute-0 python3.9[55223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:19 compute-0 sudo[55221]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:19 compute-0 sudo[55344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vareneknhwfihqehpfmhctllctscrwfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241298.4904988-62-253307241126230/AnsiballZ_copy.py'
Sep 30 14:08:19 compute-0 sudo[55344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:19 compute-0 python3.9[55346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759241298.4904988-62-253307241126230/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:19 compute-0 sudo[55344]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:20 compute-0 sshd-session[54919]: Connection closed by 192.168.122.30 port 54338
Sep 30 14:08:20 compute-0 sshd-session[54916]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:08:20 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Sep 30 14:08:20 compute-0 systemd[1]: session-13.scope: Consumed 1.585s CPU time.
Sep 30 14:08:20 compute-0 systemd-logind[808]: Session 13 logged out. Waiting for processes to exit.
Sep 30 14:08:20 compute-0 systemd-logind[808]: Removed session 13.
Sep 30 14:08:26 compute-0 sshd-session[55371]: Accepted publickey for zuul from 192.168.122.30 port 54016 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:08:26 compute-0 systemd-logind[808]: New session 14 of user zuul.
Sep 30 14:08:26 compute-0 systemd[1]: Started Session 14 of User zuul.
Sep 30 14:08:26 compute-0 sshd-session[55371]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:08:28 compute-0 python3.9[55524]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:08:29 compute-0 sudo[55678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqdqdlrejpphvkqonfhhdyuytfmyvtyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241308.7016106-59-124029277822478/AnsiballZ_file.py'
Sep 30 14:08:29 compute-0 sudo[55678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:29 compute-0 python3.9[55680]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:29 compute-0 sudo[55678]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:30 compute-0 sudo[55853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nedxdpcfooyqznpodzlvrqhckdfwrzdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241309.5960703-83-68235168711698/AnsiballZ_stat.py'
Sep 30 14:08:30 compute-0 sudo[55853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:30 compute-0 python3.9[55855]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:30 compute-0 sudo[55853]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:30 compute-0 sudo[55976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujuwamslbxkxjbydulgtpevhrwycknev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241309.5960703-83-68235168711698/AnsiballZ_copy.py'
Sep 30 14:08:30 compute-0 sudo[55976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:31 compute-0 python3.9[55978]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759241309.5960703-83-68235168711698/.source.json _original_basename=.3kv_xkqi follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:31 compute-0 sudo[55976]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:31 compute-0 sudo[56128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puqqfgrpcovgkwyevavkuyoifdlocgdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241311.5563474-152-25933638385174/AnsiballZ_stat.py'
Sep 30 14:08:31 compute-0 sudo[56128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:32 compute-0 python3.9[56130]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:32 compute-0 sudo[56128]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:32 compute-0 sudo[56251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvciizxiynyulxujkidmmvnocucozkbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241311.5563474-152-25933638385174/AnsiballZ_copy.py'
Sep 30 14:08:32 compute-0 sudo[56251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:32 compute-0 python3.9[56253]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759241311.5563474-152-25933638385174/.source _original_basename=.anhps_l1 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:32 compute-0 sudo[56251]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:32 compute-0 sudo[56403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfvbxoptovblqfxitdcqdanywjsbmulf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241312.7295225-200-199194413183461/AnsiballZ_file.py'
Sep 30 14:08:32 compute-0 sudo[56403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:33 compute-0 python3.9[56405]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:08:33 compute-0 sudo[56403]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:33 compute-0 sudo[56555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbskguzesldvyqwgvlgcvrsairnibjkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241313.4713893-224-191977465637595/AnsiballZ_stat.py'
Sep 30 14:08:33 compute-0 sudo[56555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:33 compute-0 python3.9[56557]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:33 compute-0 sudo[56555]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:34 compute-0 sudo[56678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syisjfidwvaslqhhappeshgajkubukcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241313.4713893-224-191977465637595/AnsiballZ_copy.py'
Sep 30 14:08:34 compute-0 sudo[56678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:34 compute-0 python3.9[56680]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759241313.4713893-224-191977465637595/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:08:34 compute-0 sudo[56678]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:34 compute-0 sudo[56830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hspjkryrwxiqasposqsnsnxodwokgsrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241314.650414-224-272925360975621/AnsiballZ_stat.py'
Sep 30 14:08:34 compute-0 sudo[56830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:35 compute-0 python3.9[56832]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:35 compute-0 sudo[56830]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:35 compute-0 sudo[56953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlsxqqjqucaqpqqxzbxxlseosiljozdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241314.650414-224-272925360975621/AnsiballZ_copy.py'
Sep 30 14:08:35 compute-0 sudo[56953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:35 compute-0 python3.9[56955]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759241314.650414-224-272925360975621/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:08:35 compute-0 sudo[56953]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:36 compute-0 sudo[57105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcusaicwpainpofvyajglykouitytsnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241315.999707-311-8965765698177/AnsiballZ_file.py'
Sep 30 14:08:36 compute-0 sudo[57105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:36 compute-0 python3.9[57107]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:36 compute-0 sudo[57105]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:37 compute-0 sudo[57257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivtukqqrvycxvtceannnofsqobosmryb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241317.0251849-335-181535387332808/AnsiballZ_stat.py'
Sep 30 14:08:37 compute-0 sudo[57257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:37 compute-0 python3.9[57259]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:37 compute-0 sudo[57257]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:37 compute-0 sudo[57380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juvzktfeusaswmhvkhbsejkraqqwtvxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241317.0251849-335-181535387332808/AnsiballZ_copy.py'
Sep 30 14:08:37 compute-0 sudo[57380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:38 compute-0 python3.9[57382]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241317.0251849-335-181535387332808/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:38 compute-0 sudo[57380]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:38 compute-0 sudo[57532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qykpeujmftisacwufjtnpabjqreinliq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241318.378449-380-99879004668676/AnsiballZ_stat.py'
Sep 30 14:08:38 compute-0 sudo[57532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:38 compute-0 python3.9[57534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:38 compute-0 sudo[57532]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:39 compute-0 sudo[57655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doypzdaqoyllfgwzvlpyqbckkioivzyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241318.378449-380-99879004668676/AnsiballZ_copy.py'
Sep 30 14:08:39 compute-0 sudo[57655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:39 compute-0 python3.9[57657]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241318.378449-380-99879004668676/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:39 compute-0 sudo[57655]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:40 compute-0 sudo[57809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vycedjpwxbsnphofefjkfzdgzaccjipm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241319.6279075-425-146871234488500/AnsiballZ_systemd.py'
Sep 30 14:08:40 compute-0 sudo[57809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:40 compute-0 python3.9[57811]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:08:40 compute-0 systemd[1]: Reloading.
Sep 30 14:08:40 compute-0 systemd-rc-local-generator[57838]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:08:40 compute-0 systemd-sysv-generator[57842]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:08:40 compute-0 sshd-session[57682]: Invalid user jay from 210.90.155.80 port 55662
Sep 30 14:08:40 compute-0 systemd[1]: Reloading.
Sep 30 14:08:40 compute-0 systemd-rc-local-generator[57874]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:08:40 compute-0 systemd-sysv-generator[57879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:08:41 compute-0 sshd-session[57682]: Received disconnect from 210.90.155.80 port 55662:11: Bye Bye [preauth]
Sep 30 14:08:41 compute-0 sshd-session[57682]: Disconnected from invalid user jay 210.90.155.80 port 55662 [preauth]
Sep 30 14:08:41 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Sep 30 14:08:41 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Sep 30 14:08:41 compute-0 sudo[57809]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:41 compute-0 sudo[58036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjjpneyorffqsjiddphasyhlwovbpztc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241321.2535434-449-263146817982173/AnsiballZ_stat.py'
Sep 30 14:08:41 compute-0 sudo[58036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:41 compute-0 python3.9[58038]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:41 compute-0 sudo[58036]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:42 compute-0 sudo[58159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbugzvkplwnfmhnvzlaeqxlpuxzdavpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241321.2535434-449-263146817982173/AnsiballZ_copy.py'
Sep 30 14:08:42 compute-0 sudo[58159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:42 compute-0 python3.9[58161]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241321.2535434-449-263146817982173/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:42 compute-0 sudo[58159]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:42 compute-0 sudo[58311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohsaqnwrcjbscofglplazkcxpquksgbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241322.5314548-494-230258087559449/AnsiballZ_stat.py'
Sep 30 14:08:42 compute-0 sudo[58311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:43 compute-0 python3.9[58313]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:43 compute-0 sudo[58311]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:43 compute-0 sudo[58434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztgsfooopnvtwavuwamnqqfpfgnnanmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241322.5314548-494-230258087559449/AnsiballZ_copy.py'
Sep 30 14:08:43 compute-0 sudo[58434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:43 compute-0 python3.9[58436]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241322.5314548-494-230258087559449/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:43 compute-0 sudo[58434]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:44 compute-0 sudo[58586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isfmevbokoibkholjizbtgsrqtlpkqkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241323.7863054-539-151569486301825/AnsiballZ_systemd.py'
Sep 30 14:08:44 compute-0 sudo[58586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:44 compute-0 python3.9[58588]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:08:44 compute-0 systemd[1]: Reloading.
Sep 30 14:08:44 compute-0 systemd-rc-local-generator[58616]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:08:44 compute-0 systemd-sysv-generator[58619]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:08:44 compute-0 systemd[1]: Reloading.
Sep 30 14:08:44 compute-0 systemd-rc-local-generator[58653]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:08:44 compute-0 systemd-sysv-generator[58656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:08:44 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 14:08:44 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 14:08:44 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 14:08:44 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 14:08:44 compute-0 sudo[58586]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:45 compute-0 sshd-session[58689]: Invalid user matt from 209.38.228.14 port 59134
Sep 30 14:08:45 compute-0 sshd-session[58689]: Received disconnect from 209.38.228.14 port 59134:11: Bye Bye [preauth]
Sep 30 14:08:45 compute-0 sshd-session[58689]: Disconnected from invalid user matt 209.38.228.14 port 59134 [preauth]
Sep 30 14:08:45 compute-0 python3.9[58816]: ansible-ansible.builtin.service_facts Invoked
Sep 30 14:08:45 compute-0 network[58833]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 14:08:45 compute-0 network[58834]: 'network-scripts' will be removed from distribution in near future.
Sep 30 14:08:45 compute-0 network[58835]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 14:08:49 compute-0 sudo[59097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfrgenvngxbskyrfclnqqhzpugtogwci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241329.4428678-587-6658646353203/AnsiballZ_systemd.py'
Sep 30 14:08:49 compute-0 sudo[59097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:50 compute-0 python3.9[59099]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:08:50 compute-0 systemd[1]: Reloading.
Sep 30 14:08:50 compute-0 systemd-rc-local-generator[59126]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:08:50 compute-0 systemd-sysv-generator[59132]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:08:50 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Sep 30 14:08:50 compute-0 iptables.init[59138]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Sep 30 14:08:50 compute-0 iptables.init[59138]: iptables: Flushing firewall rules: [  OK  ]
Sep 30 14:08:50 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Sep 30 14:08:50 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Sep 30 14:08:50 compute-0 sudo[59097]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:51 compute-0 sudo[59332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxtturyfslfikeszhjkwdkgvqgdljmfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241330.833928-587-154198533157146/AnsiballZ_systemd.py'
Sep 30 14:08:51 compute-0 sudo[59332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:51 compute-0 python3.9[59334]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:08:51 compute-0 sudo[59332]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:51 compute-0 sudo[59486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmubtkdsskzjfpmhrfnlrtlhqktmuojs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241331.7094562-635-270525901021294/AnsiballZ_systemd.py'
Sep 30 14:08:51 compute-0 sudo[59486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:52 compute-0 python3.9[59488]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:08:52 compute-0 systemd[1]: Reloading.
Sep 30 14:08:52 compute-0 systemd-sysv-generator[59520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:08:52 compute-0 systemd-rc-local-generator[59517]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:08:52 compute-0 systemd[1]: Starting Netfilter Tables...
Sep 30 14:08:52 compute-0 systemd[1]: Finished Netfilter Tables.
Sep 30 14:08:52 compute-0 sudo[59486]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:53 compute-0 sudo[59679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpwnhadsmbhnuowdblwtonwwsdkliuav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241332.7754583-659-223551640209142/AnsiballZ_command.py'
Sep 30 14:08:53 compute-0 sudo[59679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:53 compute-0 python3.9[59681]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:08:53 compute-0 sudo[59679]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:54 compute-0 sudo[59832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyuyrfkzsoempozcdozlibxijwjfmkiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241333.9522204-701-223032948870/AnsiballZ_stat.py'
Sep 30 14:08:54 compute-0 sudo[59832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:54 compute-0 python3.9[59834]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:08:54 compute-0 sudo[59832]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:54 compute-0 sudo[59957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axygyvinbvgimahbhbgndifdytuniwaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241333.9522204-701-223032948870/AnsiballZ_copy.py'
Sep 30 14:08:54 compute-0 sudo[59957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:08:54 compute-0 python3.9[59959]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759241333.9522204-701-223032948870/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:08:55 compute-0 sudo[59957]: pam_unix(sudo:session): session closed for user root
Sep 30 14:08:55 compute-0 python3.9[60110]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:08:55 compute-0 polkitd[8246]: Registered Authentication Agent for unix-process:60112:469820 (system bus name :1.524 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Sep 30 14:09:20 compute-0 polkitd[8246]: Unregistered Authentication Agent for unix-process:60112:469820 (system bus name :1.524, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Sep 30 14:09:20 compute-0 polkitd[8246]: Operator of unix-process:60112:469820 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.523 [<unknown>] (owned by unix-user:zuul)
Sep 30 14:09:20 compute-0 polkit-agent-helper-1[60124]: pam_unix(polkit-1:auth): conversation failed
Sep 30 14:09:20 compute-0 polkit-agent-helper-1[60124]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Sep 30 14:09:21 compute-0 sshd-session[55374]: Connection closed by 192.168.122.30 port 54016
Sep 30 14:09:21 compute-0 sshd-session[55371]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:09:21 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Sep 30 14:09:21 compute-0 systemd[1]: session-14.scope: Consumed 18.759s CPU time.
Sep 30 14:09:21 compute-0 systemd-logind[808]: Session 14 logged out. Waiting for processes to exit.
Sep 30 14:09:21 compute-0 systemd-logind[808]: Removed session 14.
Sep 30 14:09:34 compute-0 sshd-session[60150]: Accepted publickey for zuul from 192.168.122.30 port 41542 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:09:34 compute-0 systemd-logind[808]: New session 15 of user zuul.
Sep 30 14:09:34 compute-0 systemd[1]: Started Session 15 of User zuul.
Sep 30 14:09:34 compute-0 sshd-session[60150]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:09:36 compute-0 python3.9[60303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:09:36 compute-0 sudo[60457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psagdkcitvmobqomuftxtycqosbbyexk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241376.4427722-59-208799679958547/AnsiballZ_file.py'
Sep 30 14:09:36 compute-0 sudo[60457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:37 compute-0 python3.9[60459]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:09:37 compute-0 sudo[60457]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:37 compute-0 sudo[60632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahjjotkzlsbwtyekwjlbmzzdwrqomzcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241377.345855-83-142242371678198/AnsiballZ_stat.py'
Sep 30 14:09:37 compute-0 sudo[60632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:38 compute-0 python3.9[60634]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:09:38 compute-0 sudo[60632]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:38 compute-0 sudo[60710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwrcrlbwttkwlngfoowwmvgwjqzmxhsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241377.345855-83-142242371678198/AnsiballZ_file.py'
Sep 30 14:09:38 compute-0 sudo[60710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:38 compute-0 python3.9[60712]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.hogikjgg recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:09:38 compute-0 sudo[60710]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:39 compute-0 sudo[60862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-divqktincechovkgfeymtzdkxjanwrvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241379.0261443-143-121317926205306/AnsiballZ_stat.py'
Sep 30 14:09:39 compute-0 sudo[60862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:39 compute-0 python3.9[60864]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:09:39 compute-0 sudo[60862]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:39 compute-0 sudo[60940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrojfijboaovzvlcvfpucpfryozoepae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241379.0261443-143-121317926205306/AnsiballZ_file.py'
Sep 30 14:09:39 compute-0 sudo[60940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:40 compute-0 python3.9[60942]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.m45s_ndb recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:09:40 compute-0 sudo[60940]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:40 compute-0 sudo[61092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsxzjrhwoxrgskromwgevbjwmshueqrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241380.4722116-182-115368547771140/AnsiballZ_file.py'
Sep 30 14:09:40 compute-0 sudo[61092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:40 compute-0 python3.9[61094]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:09:40 compute-0 sudo[61092]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:41 compute-0 sudo[61244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghsblfdutwqoiquemrgkdjwocwuqflrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241381.1259913-206-265638872060388/AnsiballZ_stat.py'
Sep 30 14:09:41 compute-0 sudo[61244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:41 compute-0 python3.9[61246]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:09:41 compute-0 sudo[61244]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:41 compute-0 sudo[61322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwlwxowrmheyewfqxdwrvkjiogqkfgzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241381.1259913-206-265638872060388/AnsiballZ_file.py'
Sep 30 14:09:41 compute-0 sudo[61322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:42 compute-0 python3.9[61324]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:09:42 compute-0 sudo[61322]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:42 compute-0 sudo[61474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgezbvdvnxdoxuvxcrryripbgmvaejoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241382.2051458-206-232974789876701/AnsiballZ_stat.py'
Sep 30 14:09:42 compute-0 sudo[61474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:42 compute-0 python3.9[61476]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:09:42 compute-0 sudo[61474]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:42 compute-0 sudo[61552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jatgdzqebjfqsbqxurluxljounpjpwph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241382.2051458-206-232974789876701/AnsiballZ_file.py'
Sep 30 14:09:42 compute-0 sudo[61552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:43 compute-0 python3.9[61554]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:09:43 compute-0 sudo[61552]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:43 compute-0 sudo[61704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxobyggbvsfmliujvzlgsxucsbqxgdad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241383.3988373-275-38367646265834/AnsiballZ_file.py'
Sep 30 14:09:43 compute-0 sudo[61704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:44 compute-0 python3.9[61706]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:09:44 compute-0 sudo[61704]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:44 compute-0 sudo[61856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmuqjsqpmcqhblyuybdwlpibkpaealop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241384.3157861-299-163351561569404/AnsiballZ_stat.py'
Sep 30 14:09:44 compute-0 sudo[61856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:44 compute-0 python3.9[61858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:09:44 compute-0 sudo[61856]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:45 compute-0 sudo[61934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dngyymctxauwtiormrokvdvmijudzvvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241384.3157861-299-163351561569404/AnsiballZ_file.py'
Sep 30 14:09:45 compute-0 sudo[61934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:45 compute-0 python3.9[61936]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:09:45 compute-0 sudo[61934]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:45 compute-0 sudo[62088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrfyjcpetfcdycwcbgcwcwlcwuueccbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241385.5949104-335-66129283018599/AnsiballZ_stat.py'
Sep 30 14:09:45 compute-0 sudo[62088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:46 compute-0 python3.9[62090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:09:46 compute-0 sudo[62088]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:46 compute-0 sudo[62166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwcwvxyvcoiyjgkgrxlbgdyubyqlpfnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241385.5949104-335-66129283018599/AnsiballZ_file.py'
Sep 30 14:09:46 compute-0 sudo[62166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:46 compute-0 sshd-session[62076]: Received disconnect from 209.38.228.14 port 51190:11: Bye Bye [preauth]
Sep 30 14:09:46 compute-0 sshd-session[62076]: Disconnected from authenticating user root 209.38.228.14 port 51190 [preauth]
Sep 30 14:09:46 compute-0 python3.9[62168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:09:46 compute-0 sudo[62166]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:47 compute-0 sudo[62318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlqsdfsensuqbaevndqondsdmexzrzai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241386.7791984-371-280043540297783/AnsiballZ_systemd.py'
Sep 30 14:09:47 compute-0 sudo[62318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:47 compute-0 python3.9[62320]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:09:47 compute-0 systemd[1]: Reloading.
Sep 30 14:09:47 compute-0 systemd-sysv-generator[62350]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:09:47 compute-0 systemd-rc-local-generator[62347]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:09:47 compute-0 sudo[62318]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:48 compute-0 sudo[62506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-strstoudhbxqzyssboombcrojsulfmmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241388.1269858-395-88072427487064/AnsiballZ_stat.py'
Sep 30 14:09:48 compute-0 sudo[62506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:48 compute-0 python3.9[62508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:09:48 compute-0 sudo[62506]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:49 compute-0 sudo[62584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufsncjbdgfjbcsjiqzoaphsvwvxlyxrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241388.1269858-395-88072427487064/AnsiballZ_file.py'
Sep 30 14:09:49 compute-0 sudo[62584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:49 compute-0 python3.9[62586]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:09:49 compute-0 sudo[62584]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:49 compute-0 sudo[62738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phivqzphcbbqqyevbrgtoipzitxqyiwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241389.4635544-431-168716709415462/AnsiballZ_stat.py'
Sep 30 14:09:49 compute-0 sudo[62738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:49 compute-0 python3.9[62740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:09:50 compute-0 sudo[62738]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:50 compute-0 sudo[62816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwltylpeqcnmyfpfrppjttuaigwtylco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241389.4635544-431-168716709415462/AnsiballZ_file.py'
Sep 30 14:09:50 compute-0 sudo[62816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:50 compute-0 sshd-session[62587]: Invalid user ctf from 210.90.155.80 port 50834
Sep 30 14:09:50 compute-0 python3.9[62818]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:09:50 compute-0 sudo[62816]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:50 compute-0 sshd-session[62587]: Received disconnect from 210.90.155.80 port 50834:11: Bye Bye [preauth]
Sep 30 14:09:50 compute-0 sshd-session[62587]: Disconnected from invalid user ctf 210.90.155.80 port 50834 [preauth]
Sep 30 14:09:50 compute-0 sudo[62968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgssuxfecqelmhhxbqayhuyxvybsavvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241390.6094422-467-102475373640532/AnsiballZ_systemd.py'
Sep 30 14:09:50 compute-0 sudo[62968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:51 compute-0 python3.9[62970]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:09:51 compute-0 systemd[1]: Reloading.
Sep 30 14:09:51 compute-0 systemd-rc-local-generator[62999]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:09:51 compute-0 systemd-sysv-generator[63003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:09:51 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 14:09:51 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 14:09:51 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 14:09:51 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 14:09:51 compute-0 sudo[62968]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:52 compute-0 python3.9[63161]: ansible-ansible.builtin.service_facts Invoked
Sep 30 14:09:52 compute-0 network[63178]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 14:09:52 compute-0 network[63179]: 'network-scripts' will be removed from distribution in near future.
Sep 30 14:09:52 compute-0 network[63180]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 14:09:57 compute-0 sudo[63441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvjkgrasuneukqfnchaerpkugwrwyoca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241397.6675098-545-45947989476205/AnsiballZ_stat.py'
Sep 30 14:09:57 compute-0 sudo[63441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:58 compute-0 python3.9[63443]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:09:58 compute-0 sudo[63441]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:58 compute-0 sudo[63519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggblgidwtcpchiadecackbozydyqwwpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241397.6675098-545-45947989476205/AnsiballZ_file.py'
Sep 30 14:09:58 compute-0 sudo[63519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:58 compute-0 python3.9[63521]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:09:58 compute-0 sudo[63519]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:59 compute-0 sudo[63671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqzfjbqyntavobtqhttoxkcdtrtteoxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241398.99328-584-233315508781821/AnsiballZ_file.py'
Sep 30 14:09:59 compute-0 sudo[63671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:09:59 compute-0 python3.9[63673]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:09:59 compute-0 sudo[63671]: pam_unix(sudo:session): session closed for user root
Sep 30 14:09:59 compute-0 sudo[63823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdyzrbhfonpsubfwgrxwrdolmkzzsroa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241399.6904976-608-145386769382241/AnsiballZ_stat.py'
Sep 30 14:09:59 compute-0 sudo[63823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:00 compute-0 python3.9[63825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:10:00 compute-0 sudo[63823]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:00 compute-0 sudo[63946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmwwgzfzpblxlfuvvasenvhaibtywmtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241399.6904976-608-145386769382241/AnsiballZ_copy.py'
Sep 30 14:10:00 compute-0 sudo[63946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:01 compute-0 python3.9[63948]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241399.6904976-608-145386769382241/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:01 compute-0 sudo[63946]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:01 compute-0 sudo[64098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cahpqdfhfrvjintzpkzlenbzbapfytak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241401.4455862-662-257713439587136/AnsiballZ_timezone.py'
Sep 30 14:10:01 compute-0 sudo[64098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:02 compute-0 python3.9[64100]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Sep 30 14:10:02 compute-0 systemd[1]: Starting Time & Date Service...
Sep 30 14:10:02 compute-0 systemd[1]: Started Time & Date Service.
Sep 30 14:10:02 compute-0 sudo[64098]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:02 compute-0 sudo[64254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgihtzkwezilkaorlwmcuwhixgppidiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241402.572101-689-43506782292015/AnsiballZ_file.py'
Sep 30 14:10:02 compute-0 sudo[64254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:03 compute-0 python3.9[64256]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:03 compute-0 sudo[64254]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:03 compute-0 sudo[64406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daiunqkvtpxeqolscgdgalzeicwsaeae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241403.439541-713-219488454314227/AnsiballZ_stat.py'
Sep 30 14:10:03 compute-0 sudo[64406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:03 compute-0 python3.9[64408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:10:03 compute-0 sudo[64406]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:04 compute-0 sudo[64529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfniwqrcjoqzyvgzarpfceqizcedjxji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241403.439541-713-219488454314227/AnsiballZ_copy.py'
Sep 30 14:10:04 compute-0 sudo[64529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:04 compute-0 python3.9[64531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759241403.439541-713-219488454314227/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:04 compute-0 sudo[64529]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:05 compute-0 sudo[64681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpszixaukaujikejbkhifjbvencfhpyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241404.7688253-758-18263340798729/AnsiballZ_stat.py'
Sep 30 14:10:05 compute-0 sudo[64681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:05 compute-0 python3.9[64683]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:10:05 compute-0 sudo[64681]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:05 compute-0 sudo[64804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlneglxdeciezvvhoyxebprzdomvavqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241404.7688253-758-18263340798729/AnsiballZ_copy.py'
Sep 30 14:10:05 compute-0 sudo[64804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:05 compute-0 python3.9[64806]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759241404.7688253-758-18263340798729/.source.yaml _original_basename=.fyv2el0q follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:05 compute-0 sudo[64804]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:06 compute-0 sudo[64956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aleyailqlmdnuscmsozfnoulesfcuddw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241406.0121887-803-120839119726012/AnsiballZ_stat.py'
Sep 30 14:10:06 compute-0 sudo[64956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:06 compute-0 python3.9[64958]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:10:06 compute-0 sudo[64956]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:07 compute-0 sudo[65079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecxnxanxpsterrqiubnxzfbawqsqdrit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241406.0121887-803-120839119726012/AnsiballZ_copy.py'
Sep 30 14:10:07 compute-0 sudo[65079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:07 compute-0 python3.9[65081]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241406.0121887-803-120839119726012/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:07 compute-0 sudo[65079]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:07 compute-0 sudo[65231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjzgbrdsyrttnddidbnknylagystaitm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241407.433916-848-274896920655209/AnsiballZ_command.py'
Sep 30 14:10:07 compute-0 sudo[65231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:08 compute-0 python3.9[65233]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:10:08 compute-0 sudo[65231]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:08 compute-0 sudo[65384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvrjoocromfywdhxgvotfanmzhgtvavv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241408.2980044-872-124453111572237/AnsiballZ_command.py'
Sep 30 14:10:08 compute-0 sudo[65384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:08 compute-0 python3.9[65386]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:10:08 compute-0 sudo[65384]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:09 compute-0 sudo[65537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-natnmzlfcmxyfjahgziuchcdnlcuvocc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759241408.9737105-896-269956747661651/AnsiballZ_edpm_nftables_from_files.py'
Sep 30 14:10:09 compute-0 sudo[65537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:09 compute-0 python3[65539]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Sep 30 14:10:09 compute-0 sudo[65537]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:10 compute-0 sudo[65689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnuwtpuofjcwsbhjatcdslauclkbllgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241409.7946587-920-202420505377904/AnsiballZ_stat.py'
Sep 30 14:10:10 compute-0 sudo[65689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:10 compute-0 python3.9[65691]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:10:10 compute-0 sudo[65689]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:10 compute-0 sudo[65813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnsgttrqgtsswjwvafmbqggyskpkvxkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241409.7946587-920-202420505377904/AnsiballZ_copy.py'
Sep 30 14:10:10 compute-0 sudo[65813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:10 compute-0 python3.9[65815]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241409.7946587-920-202420505377904/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:10 compute-0 sudo[65813]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:11 compute-0 sudo[65965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbtreygumjnppnucarqhwsflftpngsqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241411.043162-965-68101693795350/AnsiballZ_stat.py'
Sep 30 14:10:11 compute-0 sudo[65965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:11 compute-0 python3.9[65967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:10:11 compute-0 sudo[65965]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:11 compute-0 sudo[66088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jujyrtnwvmcduohgqzfarnaoaxrnznvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241411.043162-965-68101693795350/AnsiballZ_copy.py'
Sep 30 14:10:11 compute-0 sudo[66088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:12 compute-0 python3.9[66090]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241411.043162-965-68101693795350/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:12 compute-0 sudo[66088]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:12 compute-0 sudo[66240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxakcpnwzlcqyrjdogjbzhrefobintzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241412.2447076-1010-19898847009722/AnsiballZ_stat.py'
Sep 30 14:10:12 compute-0 sudo[66240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:12 compute-0 python3.9[66242]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:10:12 compute-0 sudo[66240]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:13 compute-0 sudo[66363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qshasrqzuedrqcdflqeywubdrdwlqbdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241412.2447076-1010-19898847009722/AnsiballZ_copy.py'
Sep 30 14:10:13 compute-0 sudo[66363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:13 compute-0 python3.9[66365]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241412.2447076-1010-19898847009722/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:13 compute-0 sudo[66363]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:13 compute-0 sudo[66515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etnoshdoyasztfhhgkcotgswweaxtotr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241413.448724-1055-194236651790443/AnsiballZ_stat.py'
Sep 30 14:10:13 compute-0 sudo[66515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:13 compute-0 python3.9[66517]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:10:13 compute-0 sudo[66515]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:14 compute-0 sudo[66638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-harumkvzkdqudazccuymanltkgbhvrkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241413.448724-1055-194236651790443/AnsiballZ_copy.py'
Sep 30 14:10:14 compute-0 sudo[66638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:14 compute-0 python3.9[66640]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241413.448724-1055-194236651790443/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:14 compute-0 sudo[66638]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:15 compute-0 sudo[66790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dezzejpydzxlujsbfacoyhovipkfzcrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241414.6359718-1100-90249961537061/AnsiballZ_stat.py'
Sep 30 14:10:15 compute-0 sudo[66790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:15 compute-0 python3.9[66792]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:10:15 compute-0 sudo[66790]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:15 compute-0 sudo[66913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yptwfpzabhjmfuwchlyyqsumcwwvytqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241414.6359718-1100-90249961537061/AnsiballZ_copy.py'
Sep 30 14:10:15 compute-0 sudo[66913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:16 compute-0 python3.9[66915]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759241414.6359718-1100-90249961537061/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:16 compute-0 sudo[66913]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:16 compute-0 sudo[67065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpbzywzhuavkczbfzgdcuuersycvlbtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241416.1776705-1145-215315345937202/AnsiballZ_file.py'
Sep 30 14:10:16 compute-0 sudo[67065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:16 compute-0 python3.9[67067]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:16 compute-0 sudo[67065]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:17 compute-0 sudo[67217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzvoaujlthdrtpobsoaiiinzglauydll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241416.8323565-1169-265819538301257/AnsiballZ_command.py'
Sep 30 14:10:17 compute-0 sudo[67217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:17 compute-0 python3.9[67219]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:10:17 compute-0 sudo[67217]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:18 compute-0 sudo[67376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvuwafywjyfzidjbcmpypzdbaqndtwhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241417.5687926-1193-101930471767192/AnsiballZ_blockinfile.py'
Sep 30 14:10:18 compute-0 sudo[67376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:18 compute-0 python3.9[67378]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:18 compute-0 sudo[67376]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:18 compute-0 sudo[67529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxeixglgaortxzkmjpckhgizjyrpwrkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241418.4877539-1220-210872485860978/AnsiballZ_file.py'
Sep 30 14:10:18 compute-0 sudo[67529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:19 compute-0 python3.9[67531]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:19 compute-0 sudo[67529]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:19 compute-0 sudo[67681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxvyldhhqimdkrorgtelkybhqghxkuwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241419.1606102-1220-32941118312071/AnsiballZ_file.py'
Sep 30 14:10:19 compute-0 sudo[67681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:19 compute-0 python3.9[67683]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:19 compute-0 sudo[67681]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:20 compute-0 sudo[67833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufktcrggxmoyshqzswkbwrzykfppltii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241419.804036-1265-135764965208782/AnsiballZ_mount.py'
Sep 30 14:10:20 compute-0 sudo[67833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:20 compute-0 python3.9[67835]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Sep 30 14:10:20 compute-0 sudo[67833]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:20 compute-0 sudo[67986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgxnnabaxosyawndpgbsmqbffieucuoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241420.6106591-1265-113649344454802/AnsiballZ_mount.py'
Sep 30 14:10:20 compute-0 sudo[67986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:21 compute-0 python3.9[67988]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Sep 30 14:10:21 compute-0 sudo[67986]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:21 compute-0 sshd-session[60153]: Connection closed by 192.168.122.30 port 41542
Sep 30 14:10:21 compute-0 sshd-session[60150]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:10:21 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Sep 30 14:10:21 compute-0 systemd[1]: session-15.scope: Consumed 31.079s CPU time.
Sep 30 14:10:21 compute-0 systemd-logind[808]: Session 15 logged out. Waiting for processes to exit.
Sep 30 14:10:21 compute-0 systemd-logind[808]: Removed session 15.
Sep 30 14:10:27 compute-0 sshd-session[68014]: Accepted publickey for zuul from 192.168.122.30 port 43366 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:10:27 compute-0 systemd-logind[808]: New session 16 of user zuul.
Sep 30 14:10:27 compute-0 systemd[1]: Started Session 16 of User zuul.
Sep 30 14:10:27 compute-0 sshd-session[68014]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:10:27 compute-0 sudo[68167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwvtcuswxtdhnrndltcanqbxfcuicime ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241427.3085876-18-121600059959895/AnsiballZ_tempfile.py'
Sep 30 14:10:27 compute-0 sudo[68167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:28 compute-0 python3.9[68169]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Sep 30 14:10:28 compute-0 sudo[68167]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:28 compute-0 sudo[68319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjxebsftjhszfdxinpzqxsyxuvjixkhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241428.1923485-54-48795330518478/AnsiballZ_stat.py'
Sep 30 14:10:28 compute-0 sudo[68319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:28 compute-0 python3.9[68321]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:10:28 compute-0 sudo[68319]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:29 compute-0 sudo[68471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkizqkwyisknqvbxvqmougailcplejsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241429.082423-84-37416192363344/AnsiballZ_setup.py'
Sep 30 14:10:29 compute-0 sudo[68471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:30 compute-0 python3.9[68473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:10:30 compute-0 sudo[68471]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:30 compute-0 sudo[68623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdolfaykwpnkmshqmofqznlyqkcuihdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241430.3466554-109-171657783718115/AnsiballZ_blockinfile.py'
Sep 30 14:10:30 compute-0 sudo[68623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:31 compute-0 python3.9[68625]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnxECDV8o4yTWjIh4fvbRM7O3xbGJ1m92I7pON0ACAWIAouASSEepZzIkP/5+xhc3FvoENphurtvwMkG/2EO54537hANpdReJX9jOK8oyBKFY69IjJkJOJeVP+oxwcxoh3EWtJs1YuvUmWc2OlOxw1dU/jABEbEjdAKAhvqxRaqUYugro6sW3wPvfJAchlkp6HZlUOKtLNvQYY0TgEm3KEnnNRPy81PrLCBPFw+4r/4OLCLfGiNNBXurueYIi2AtJU5ri8w0IasaCJIuRaf0b9nZb9YhYheEZwNMWWo0TqqWLjxpEpkAwEpFt20BG5gWVcehU6LTHU1jhBHtvj/bw29G3Bjj661M2x1TalNg1qVS1uqHqt+iaTYHDkjU6EDBgNTlJB2E7o5g8gx5odi1xDt1+82pz2ofs9HExCG8e34PG+VPbiITiBmYxIhD/sedYo/whhBpwnk8Ntc6FiTJ8YKZFoDrdpRszhCSjF3Ku4tV3K/OALpdEj9gfZof1g9w0=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBHaVDEpiNIgxbcdiDZPInyHzgYXaub7mLSciYJRys3z
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCxQB4y9hJVNDPIO/1RzO1QfaaNnxXt0XWNC3imzikzmekKOgg80jMXW/2phxTZXO0o7+FqN5NV4+uvp8a+O56I=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDv3lqvLSdSe9FfjIvLbovEc/EXXFpVSKrphGNEdNwPKWxKCqbRYPxeqJl9Jji4K0tZVFsnk2y80vkJi2t49CsgHkulvDipHH0WbxzT5JmxX3U03kqn5gkmrCxpqL6za8bs9Q7mkt6mjkWly6gcmfLpKuuLvUxZKOU1LZ2AVlGJ+lx8BKyB/eLXF3G5Z+SizImDNtYWWRadJLWvD5niRNMIc2TlUCokf7CPDF8EiD9l/XSjvS1B8gsIkbj61bZbc5FPy0L7Rf3R2/GQep+DOwM+SlKvMhN3JDAnmMlD3OXlJNYzMbwR38RaTSg1pFgzzOPsqZ9Iz2JfJ1PsEjDeExvLlplJumOgKmj0EVqPUzSrgMHEIqGK1cql5+xL2pPsaxx+7FoLVxTyLuFpNgy3DE7BTVpJsFThJyWiQQOxp4VvZeErMHcsAbyAgDLQdb6+hj/Ywpz+IVhhCI/z71G4iDd0vr30Ege2Mu65bqGRrTGryXZjFKR1aotsf9ftBCV0WkM=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJtA9linW0FhxFVV4OOPBy2+xpEXZnSB7XZ4XJ6LwDJf
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFs/AgtcpVNFq8p4SVbSHfwdF0vUxZGYjSLggzy7X+2gYefshG0Ix5Z3uc2A1+UYgtw8a3032k+JQ3hw3F4uXS8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv0Wkq+j+fGZm3g4FqO0k97HEDxRPMvcwc3mezGI0d6HXz1idSLlEvisBD5x33g1ZEpqE+UKpaz5jo631sXRWDIQkB+XyBenmVirsvjb64FBUfa5ddymIDBQI7h4Nu3k3jdnLV+0VP4lc5k27jVePROBQWh5AZ504IzDUlKXASzzbP0ZT3DKXWRbeREqIK0w2errWoAuULV4cYBhmk/v4vlAliBhPh2bwRJRa43VNXHlJnX7lK1qqFHPp63fe0t23uXUssYQJ8OyJnRT7030ZOYwU4LYK0MXgYJqP7fClsFqnzrcaWJDO32L7M89peYQ5QKF0eMNHf+a15s1nhPkgnynsOqpId2OleuJZqpt00reWSxmwfG09sb6EwI4EAxGTWWL87DuwXz3ipJMbrRa+8PrLTAjrLuHC10aMtq1qejCQJgHd4yZVZ024zi9KHZMgVnzedQF9Byf3u2ZnJSMNok8VHosQ/ny421qEgNF44XEbNUD56bXCSxsU5Gg+yNNs=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP4KjWL9aevmnyMtrV7tu6pE8vgoG3wZbh9qSrtJ+XoC
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOqI6tqXjTKjeu4yD2TwTST9ws8xuagG9BbnXQ6fvmDvvniDkLihQ6k7GTTmBGgJE5lCje5bKOG2MRkcVCNXhKU=
                                             create=True mode=0644 path=/tmp/ansible.b6id3jhh state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:31 compute-0 sudo[68623]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:31 compute-0 sudo[68775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhdyrpkxtxwmobntnwbsevlirbdwixhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241431.296958-133-101308274057611/AnsiballZ_command.py'
Sep 30 14:10:31 compute-0 sudo[68775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:32 compute-0 python3.9[68777]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.b6id3jhh' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:10:32 compute-0 sudo[68775]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:32 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Sep 30 14:10:32 compute-0 sudo[68931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xytctkfcarmuwzumbnschtdvufjxpwqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241432.2117698-157-137120679206132/AnsiballZ_file.py'
Sep 30 14:10:32 compute-0 sudo[68931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:32 compute-0 python3.9[68933]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.b6id3jhh state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:32 compute-0 sudo[68931]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:33 compute-0 sshd-session[68017]: Connection closed by 192.168.122.30 port 43366
Sep 30 14:10:33 compute-0 sshd-session[68014]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:10:33 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Sep 30 14:10:33 compute-0 systemd[1]: session-16.scope: Consumed 3.388s CPU time.
Sep 30 14:10:33 compute-0 systemd-logind[808]: Session 16 logged out. Waiting for processes to exit.
Sep 30 14:10:33 compute-0 systemd-logind[808]: Removed session 16.
Sep 30 14:10:38 compute-0 sshd-session[68958]: Accepted publickey for zuul from 192.168.122.30 port 56006 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:10:38 compute-0 systemd-logind[808]: New session 17 of user zuul.
Sep 30 14:10:38 compute-0 systemd[1]: Started Session 17 of User zuul.
Sep 30 14:10:38 compute-0 sshd-session[68958]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:10:39 compute-0 python3.9[69111]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:10:40 compute-0 sudo[69265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgzoqiasezjjrwrefpuejjeriguncpcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241439.9483013-56-115303350127035/AnsiballZ_systemd.py'
Sep 30 14:10:40 compute-0 sudo[69265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:40 compute-0 python3.9[69267]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Sep 30 14:10:41 compute-0 sudo[69265]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:41 compute-0 sudo[69419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olvxiykrgdrmlbjqvcbcoefzrirwfrri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241441.1609664-80-245715406548933/AnsiballZ_systemd.py'
Sep 30 14:10:41 compute-0 sudo[69419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:41 compute-0 python3.9[69421]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:10:41 compute-0 sudo[69419]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:42 compute-0 sudo[69572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdgpgiqgrnlcizesmkeczefkzlybhlqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241442.104573-107-87648858291706/AnsiballZ_command.py'
Sep 30 14:10:42 compute-0 sudo[69572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:42 compute-0 python3.9[69574]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:10:42 compute-0 sudo[69572]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:43 compute-0 sudo[69725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uitaxvfnkngrgvqtmgoepnuvnbvgkjkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241443.2000833-131-42897985483573/AnsiballZ_stat.py'
Sep 30 14:10:43 compute-0 sudo[69725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:43 compute-0 python3.9[69727]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:10:43 compute-0 sudo[69725]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:44 compute-0 sudo[69879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buamkizdxezimcdyyucjcuvuatpgyclx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241444.1624837-155-66162547402645/AnsiballZ_command.py'
Sep 30 14:10:44 compute-0 sudo[69879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:44 compute-0 python3.9[69881]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:10:44 compute-0 sudo[69879]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:45 compute-0 sudo[70034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyfzgvzmvyyrzlpqbkrixvtodtbqfsoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241444.9058895-179-4443634075046/AnsiballZ_file.py'
Sep 30 14:10:45 compute-0 sudo[70034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:45 compute-0 python3.9[70036]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:10:45 compute-0 sudo[70034]: pam_unix(sudo:session): session closed for user root
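Taken together, the three commands logged above apply the EDPM nftables configuration; a rough shell reconstruction of that sequence, using only the file names shown in the logged parameters, is:

    # load the chain definitions first
    nft -f /etc/nftables/edpm-chains.nft
    # then flush and reload the rules and jump updates in one transaction
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f -
    # finally clear the marker file that signalled pending rule changes
    rm -f /etc/nftables/edpm-rules.nft.changed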
Sep 30 14:10:45 compute-0 sshd-session[68961]: Connection closed by 192.168.122.30 port 56006
Sep 30 14:10:45 compute-0 sshd-session[68958]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:10:45 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Sep 30 14:10:45 compute-0 systemd[1]: session-17.scope: Consumed 4.369s CPU time.
Sep 30 14:10:45 compute-0 systemd-logind[808]: Session 17 logged out. Waiting for processes to exit.
Sep 30 14:10:45 compute-0 systemd-logind[808]: Removed session 17.
Sep 30 14:10:51 compute-0 sshd-session[70061]: Accepted publickey for zuul from 192.168.122.30 port 44118 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:10:51 compute-0 systemd-logind[808]: New session 18 of user zuul.
Sep 30 14:10:51 compute-0 systemd[1]: Started Session 18 of User zuul.
Sep 30 14:10:51 compute-0 sshd-session[70061]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:10:52 compute-0 python3.9[70214]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:10:53 compute-0 sudo[70370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtaqovndwmkzapdjwljfhpsjqwhtquqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241452.9450846-62-231343264336160/AnsiballZ_setup.py'
Sep 30 14:10:53 compute-0 sudo[70370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:53 compute-0 python3.9[70372]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:10:53 compute-0 sudo[70370]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:53 compute-0 sshd-session[70219]: Received disconnect from 210.90.155.80 port 46058:11: Bye Bye [preauth]
Sep 30 14:10:53 compute-0 sshd-session[70219]: Disconnected from authenticating user root 210.90.155.80 port 46058 [preauth]
Sep 30 14:10:54 compute-0 sudo[70454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocjficeocnuvsqtqfmctdulhzwqmxukb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241452.9450846-62-231343264336160/AnsiballZ_dnf.py'
Sep 30 14:10:54 compute-0 sudo[70454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:10:54 compute-0 python3.9[70458]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Sep 30 14:10:54 compute-0 sshd-session[70455]: Invalid user seekcy from 209.38.228.14 port 42448
Sep 30 14:10:54 compute-0 sshd-session[70455]: Received disconnect from 209.38.228.14 port 42448:11: Bye Bye [preauth]
Sep 30 14:10:54 compute-0 sshd-session[70455]: Disconnected from invalid user seekcy 209.38.228.14 port 42448 [preauth]
Sep 30 14:10:55 compute-0 sudo[70454]: pam_unix(sudo:session): session closed for user root
Sep 30 14:10:56 compute-0 python3.9[70609]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:10:57 compute-0 python3.9[70760]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 14:10:58 compute-0 python3.9[70910]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:10:58 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:10:59 compute-0 python3.9[71061]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:10:59 compute-0 sshd-session[70064]: Connection closed by 192.168.122.30 port 44118
Sep 30 14:10:59 compute-0 sshd-session[70061]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:10:59 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Sep 30 14:10:59 compute-0 systemd[1]: session-18.scope: Consumed 6.104s CPU time.
Sep 30 14:10:59 compute-0 systemd-logind[808]: Session 18 logged out. Waiting for processes to exit.
Sep 30 14:10:59 compute-0 systemd-logind[808]: Removed session 18.
Sep 30 14:11:08 compute-0 sshd-session[71086]: Accepted publickey for zuul from 38.129.56.219 port 49486 ssh2: RSA SHA256:PQ5gAlGqGw5eyUoP3tGuJWzdC0qrtAhhgPp/wWGLEq4
Sep 30 14:11:08 compute-0 systemd-logind[808]: New session 19 of user zuul.
Sep 30 14:11:08 compute-0 systemd[1]: Started Session 19 of User zuul.
Sep 30 14:11:08 compute-0 sshd-session[71086]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:11:08 compute-0 sudo[71162]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyljnfxqhckypkjykgbulkgswiuqjmux ; /usr/bin/python3'
Sep 30 14:11:08 compute-0 sudo[71162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:08 compute-0 useradd[71166]: new group: name=ceph-admin, GID=42478
Sep 30 14:11:08 compute-0 useradd[71166]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Sep 30 14:11:08 compute-0 sudo[71162]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:09 compute-0 sudo[71248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppxdqgcmjcpwurvznplddgcsubvrpveu ; /usr/bin/python3'
Sep 30 14:11:09 compute-0 sudo[71248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:09 compute-0 sudo[71248]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:09 compute-0 sudo[71321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfxlnlgrjbqsbqqbxuamnfqmctuflvec ; /usr/bin/python3'
Sep 30 14:11:09 compute-0 sudo[71321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:09 compute-0 sudo[71321]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:10 compute-0 sudo[71371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scdutyshchtzwaoaewckkakjwmrivqvr ; /usr/bin/python3'
Sep 30 14:11:10 compute-0 sudo[71371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:10 compute-0 sudo[71371]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:10 compute-0 sudo[71397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oesgternbxtvmpjwbtfdkdkfhxxzewud ; /usr/bin/python3'
Sep 30 14:11:10 compute-0 sudo[71397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:10 compute-0 sudo[71397]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:10 compute-0 sudo[71423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwfcnjkqheeqbretxayblbuxcnwjpoww ; /usr/bin/python3'
Sep 30 14:11:10 compute-0 sudo[71423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:11 compute-0 sudo[71423]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:11 compute-0 sudo[71449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzkyrkjtusblcpnabxwvcgdomtgivssc ; /usr/bin/python3'
Sep 30 14:11:11 compute-0 sudo[71449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:11 compute-0 sudo[71449]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:11 compute-0 sudo[71527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbephogvvpihogodtaiobpujjdnmdaze ; /usr/bin/python3'
Sep 30 14:11:11 compute-0 sudo[71527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:12 compute-0 sudo[71527]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:12 compute-0 sudo[71600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcxoqcdztcqgbyrltgtvxdmtvfdreqmr ; /usr/bin/python3'
Sep 30 14:11:12 compute-0 sudo[71600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:12 compute-0 sudo[71600]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:12 compute-0 sudo[71702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oarsxikccandmiihzfnjmejxisjdgzqg ; /usr/bin/python3'
Sep 30 14:11:12 compute-0 sudo[71702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:12 compute-0 sudo[71702]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:13 compute-0 sudo[71775]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofogklbjoulqptfvgsrunygtftevzztv ; /usr/bin/python3'
Sep 30 14:11:13 compute-0 sudo[71775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:13 compute-0 sudo[71775]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:13 compute-0 sudo[71825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blzpiryvkgctrfavvfiadfyvcsbxuhcc ; /usr/bin/python3'
Sep 30 14:11:13 compute-0 sudo[71825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:14 compute-0 python3[71827]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:11:15 compute-0 sudo[71825]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:15 compute-0 sudo[71921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wycthyyfuxwdibtqndjyksfyeokxenaa ; /usr/bin/python3'
Sep 30 14:11:15 compute-0 sudo[71921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:15 compute-0 python3[71923]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Sep 30 14:11:17 compute-0 sudo[71921]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:17 compute-0 sudo[71948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viqzxfhypoemuyudrimphdofbbslyaor ; /usr/bin/python3'
Sep 30 14:11:17 compute-0 sudo[71948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:17 compute-0 python3[71950]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:11:17 compute-0 sudo[71948]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:17 compute-0 sudo[71974]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwnrvcjjwybqwdewyfkwampvibexdvg ; /usr/bin/python3'
Sep 30 14:11:17 compute-0 sudo[71974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:17 compute-0 python3[71976]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:11:17 compute-0 kernel: loop: module loaded
Sep 30 14:11:17 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Sep 30 14:11:17 compute-0 sudo[71974]: pam_unix(sudo:session): session closed for user root
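The command block above creates a sparse 20 GiB backing file and attaches it to /dev/loop3 (hence the kernel's "detected capacity change" message); restated from the logged _raw_params, the steps are roughly:

    # create a sparse 20G file: nothing is written, only the size is set
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    # attach the file to the loop device the play expects
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    # confirm the new block device is visible
    lsblk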
Sep 30 14:11:18 compute-0 sudo[72009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zavypgpmaqriciahpfgbpzkzaxayqdsz ; /usr/bin/python3'
Sep 30 14:11:18 compute-0 sudo[72009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:18 compute-0 python3[72011]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:11:18 compute-0 lvm[72014]: PV /dev/loop3 not used.
Sep 30 14:11:18 compute-0 lvm[72016]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:11:18 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Sep 30 14:11:18 compute-0 lvm[72022]:   1 logical volume(s) in volume group "ceph_vg0" now active
Sep 30 14:11:18 compute-0 lvm[72026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:11:18 compute-0 lvm[72026]: VG ceph_vg0 finished
Sep 30 14:11:18 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Sep 30 14:11:18 compute-0 sudo[72009]: pam_unix(sudo:session): session closed for user root
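The LVM objects for the OSD are then layered on top of the loop device; reconstructed from the logged command, the sequence is approximately:

    # initialise the loop device as an LVM physical volume
    pvcreate /dev/loop3
    # create a volume group and a single LV spanning all free space
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    # verify the result
    lvs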
Sep 30 14:11:18 compute-0 sudo[72102]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayszpeusxmomadlwypvbzxvzqnmceglc ; /usr/bin/python3'
Sep 30 14:11:18 compute-0 sudo[72102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:19 compute-0 python3[72104]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 14:11:19 compute-0 sudo[72102]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:19 compute-0 sudo[72175]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnksfrfsbmwyhbrkumwyinqahudpghhz ; /usr/bin/python3'
Sep 30 14:11:19 compute-0 sudo[72175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:19 compute-0 python3[72177]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759241478.8100193-34792-274130229158675/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:11:19 compute-0 sudo[72175]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:19 compute-0 sudo[72225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlsyfqzluajgicppltbiepzhrhagwipl ; /usr/bin/python3'
Sep 30 14:11:19 compute-0 sudo[72225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:20 compute-0 python3[72227]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:11:20 compute-0 systemd[1]: Reloading.
Sep 30 14:11:20 compute-0 systemd-sysv-generator[72258]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:11:20 compute-0 systemd-rc-local-generator[72254]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:11:20 compute-0 systemd[1]: Starting Ceph OSD losetup...
Sep 30 14:11:20 compute-0 bash[72267]: /dev/loop3: [64513]:4194939 (/var/lib/ceph-osd-0.img)
Sep 30 14:11:20 compute-0 systemd[1]: Finished Ceph OSD losetup.
Sep 30 14:11:20 compute-0 lvm[72268]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:11:20 compute-0 lvm[72268]: VG ceph_vg0 finished
Sep 30 14:11:20 compute-0 sudo[72225]: pam_unix(sudo:session): session closed for user root
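The ceph-osd-losetup-0.service content is not logged (only its checksum), but from the "Ceph OSD losetup" start-up and the losetup output at 14:11:20 it plausibly re-attaches the backing file at boot; a hypothetical sketch, not the actual deployed unit, might look like:

    # HYPOTHETICAL reconstruction of /etc/systemd/system/ceph-osd-losetup-0.service;
    # the real unit body does not appear in this log.
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    # re-attach the loop device if it is not already set up
    ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 || /sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'
    RemainAfterExit=true

    [Install]
    WantedBy=multi-user.target
    EOF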
Sep 30 14:11:23 compute-0 python3[72292]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:11:25 compute-0 sudo[72383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzzkipgmoimpokhoevqpeyybwazcajwz ; /usr/bin/python3'
Sep 30 14:11:25 compute-0 sudo[72383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:26 compute-0 python3[72385]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Sep 30 14:11:28 compute-0 sudo[72383]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:28 compute-0 sudo[72441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wibefbjorrymygveucrskpmhfmecugsj ; /usr/bin/python3'
Sep 30 14:11:28 compute-0 sudo[72441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:28 compute-0 python3[72443]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Sep 30 14:11:31 compute-0 groupadd[72453]: group added to /etc/group: name=cephadm, GID=992
Sep 30 14:11:31 compute-0 groupadd[72453]: group added to /etc/gshadow: name=cephadm
Sep 30 14:11:31 compute-0 groupadd[72453]: new group: name=cephadm, GID=992
Sep 30 14:11:31 compute-0 useradd[72460]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Sep 30 14:11:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 14:11:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 14:11:32 compute-0 sudo[72441]: pam_unix(sudo:session): session closed for user root
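The two dnf module calls above pull in the Ceph tooling; in shell terms they amount to:

    # enable the CentOS Storage SIG repository for Ceph Squid
    dnf -y install centos-release-ceph-squid
    # then install the cephadm bootstrap tool from that repository
    dnf -y install cephadm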
Sep 30 14:11:32 compute-0 sudo[72559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwppwprmmnufjubzmpfjnywgwxvvwrcp ; /usr/bin/python3'
Sep 30 14:11:32 compute-0 sudo[72559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:32 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 14:11:32 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 14:11:32 compute-0 systemd[1]: run-r18104351d03e4950851078435cc38924.service: Deactivated successfully.
Sep 30 14:11:32 compute-0 python3[72561]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:11:32 compute-0 sudo[72559]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:32 compute-0 sudo[72588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfxcgbcuwzmyndiwvjyqyfrdbjkyraih ; /usr/bin/python3'
Sep 30 14:11:32 compute-0 sudo[72588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:33 compute-0 python3[72590]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:11:33 compute-0 sudo[72588]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:33 compute-0 sudo[72652]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udabldfdmnysqrtxkjtbmaetwvibgjvj ; /usr/bin/python3'
Sep 30 14:11:33 compute-0 sudo[72652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:34 compute-0 python3[72654]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:11:34 compute-0 sudo[72652]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:34 compute-0 sudo[72678]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghnmsbwzlrijtjhgqmbcimotdnohsldx ; /usr/bin/python3'
Sep 30 14:11:34 compute-0 sudo[72678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:11:34 compute-0 python3[72680]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:11:34 compute-0 sudo[72678]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:35 compute-0 sudo[72756]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvvrzyzhwxyygytkkoecdbqlzwvwcuze ; /usr/bin/python3'
Sep 30 14:11:35 compute-0 sudo[72756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:35 compute-0 python3[72758]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 14:11:35 compute-0 sudo[72756]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:35 compute-0 sudo[72829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgqxyorvtyclclchkteznvtruobxwcmy ; /usr/bin/python3'
Sep 30 14:11:35 compute-0 sudo[72829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:35 compute-0 python3[72831]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759241494.8613138-35003-116080545240083/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:11:35 compute-0 sudo[72829]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:36 compute-0 sudo[72931]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acufrtqnrfjxfwlxcsayhoefxqrojprl ; /usr/bin/python3'
Sep 30 14:11:36 compute-0 sudo[72931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:36 compute-0 python3[72933]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 14:11:36 compute-0 sudo[72931]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:36 compute-0 sudo[73004]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvdfixqzhhzrecyorojmpsdmtlhogvbt ; /usr/bin/python3'
Sep 30 14:11:36 compute-0 sudo[73004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:36 compute-0 python3[73006]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759241495.9763086-35022-60325998152209/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:11:36 compute-0 sudo[73004]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:36 compute-0 sudo[73054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzzowknkasyrfhfqalgluaadtmfxssgy ; /usr/bin/python3'
Sep 30 14:11:36 compute-0 sudo[73054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:36 compute-0 python3[73056]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:11:36 compute-0 sudo[73054]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:37 compute-0 sudo[73082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkhonfynegsmngvszqlnudirzmqlnsly ; /usr/bin/python3'
Sep 30 14:11:37 compute-0 sudo[73082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:37 compute-0 python3[73084]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:11:37 compute-0 sudo[73082]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:37 compute-0 sudo[73110]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyhulomphohhezobejvyjqhmqhhgbalz ; /usr/bin/python3'
Sep 30 14:11:37 compute-0 sudo[73110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:37 compute-0 python3[73112]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:11:37 compute-0 sudo[73110]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:37 compute-0 sudo[73138]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzwynlmrizhekettqbnjecowlqkhvlhm ; /usr/bin/python3'
Sep 30 14:11:37 compute-0 sudo[73138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:11:37 compute-0 python3[73140]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
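The bootstrap call above, unwrapped from the single logged line, is easier to read as a multi-line command; the flags are exactly those recorded in the log:

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100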
Sep 30 14:11:38 compute-0 sshd-session[73144]: Accepted publickey for ceph-admin from 192.168.122.100 port 43496 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:11:38 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Sep 30 14:11:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Sep 30 14:11:38 compute-0 systemd-logind[808]: New session 20 of user ceph-admin.
Sep 30 14:11:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Sep 30 14:11:38 compute-0 systemd[1]: Starting User Manager for UID 42477...
Sep 30 14:11:38 compute-0 systemd[73148]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:11:38 compute-0 systemd[73148]: Queued start job for default target Main User Target.
Sep 30 14:11:38 compute-0 systemd[73148]: Created slice User Application Slice.
Sep 30 14:11:38 compute-0 systemd[73148]: Started Mark boot as successful after the user session has run 2 minutes.
Sep 30 14:11:38 compute-0 systemd[73148]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 14:11:38 compute-0 systemd[73148]: Reached target Paths.
Sep 30 14:11:38 compute-0 systemd[73148]: Reached target Timers.
Sep 30 14:11:38 compute-0 systemd[73148]: Starting D-Bus User Message Bus Socket...
Sep 30 14:11:38 compute-0 systemd[73148]: Starting Create User's Volatile Files and Directories...
Sep 30 14:11:38 compute-0 systemd[73148]: Listening on D-Bus User Message Bus Socket.
Sep 30 14:11:38 compute-0 systemd[73148]: Finished Create User's Volatile Files and Directories.
Sep 30 14:11:38 compute-0 systemd[73148]: Reached target Sockets.
Sep 30 14:11:38 compute-0 systemd[73148]: Reached target Basic System.
Sep 30 14:11:38 compute-0 systemd[73148]: Reached target Main User Target.
Sep 30 14:11:38 compute-0 systemd[73148]: Startup finished in 137ms.
Sep 30 14:11:38 compute-0 systemd[1]: Started User Manager for UID 42477.
Sep 30 14:11:38 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Sep 30 14:11:38 compute-0 sshd-session[73144]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:11:38 compute-0 sudo[73164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Sep 30 14:11:38 compute-0 sudo[73164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:11:38 compute-0 sudo[73164]: pam_unix(sudo:session): session closed for user root
Sep 30 14:11:38 compute-0 sshd-session[73163]: Received disconnect from 192.168.122.100 port 43496:11: disconnected by user
Sep 30 14:11:38 compute-0 sshd-session[73163]: Disconnected from user ceph-admin 192.168.122.100 port 43496
Sep 30 14:11:38 compute-0 sshd-session[73144]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:11:38 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Sep 30 14:11:38 compute-0 systemd-logind[808]: Session 20 logged out. Waiting for processes to exit.
Sep 30 14:11:38 compute-0 systemd-logind[808]: Removed session 20.
Sep 30 14:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1987924127-merged.mount: Deactivated successfully.
Sep 30 14:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1987924127-lower\x2dmapped.mount: Deactivated successfully.
Sep 30 14:11:41 compute-0 sshd-session[73283]: Received disconnect from 193.46.255.217 port 29374:11:  [preauth]
Sep 30 14:11:41 compute-0 sshd-session[73283]: Disconnected from authenticating user root 193.46.255.217 port 29374 [preauth]
Sep 30 14:11:48 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Sep 30 14:11:48 compute-0 systemd[73148]: Activating special unit Exit the Session...
Sep 30 14:11:48 compute-0 systemd[73148]: Stopped target Main User Target.
Sep 30 14:11:48 compute-0 systemd[73148]: Stopped target Basic System.
Sep 30 14:11:48 compute-0 systemd[73148]: Stopped target Paths.
Sep 30 14:11:48 compute-0 systemd[73148]: Stopped target Sockets.
Sep 30 14:11:48 compute-0 systemd[73148]: Stopped target Timers.
Sep 30 14:11:48 compute-0 systemd[73148]: Stopped Mark boot as successful after the user session has run 2 minutes.
Sep 30 14:11:48 compute-0 systemd[73148]: Stopped Daily Cleanup of User's Temporary Directories.
Sep 30 14:11:48 compute-0 systemd[73148]: Closed D-Bus User Message Bus Socket.
Sep 30 14:11:48 compute-0 systemd[73148]: Stopped Create User's Volatile Files and Directories.
Sep 30 14:11:48 compute-0 systemd[73148]: Removed slice User Application Slice.
Sep 30 14:11:48 compute-0 systemd[73148]: Reached target Shutdown.
Sep 30 14:11:48 compute-0 systemd[73148]: Finished Exit the Session.
Sep 30 14:11:48 compute-0 systemd[73148]: Reached target Exit the Session.
Sep 30 14:11:48 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Sep 30 14:11:48 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Sep 30 14:11:48 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Sep 30 14:11:48 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Sep 30 14:11:48 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Sep 30 14:11:48 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Sep 30 14:11:48 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Sep 30 14:11:54 compute-0 sshd-session[73304]: Invalid user zhouhao from 209.38.228.14 port 53946
Sep 30 14:11:54 compute-0 sshd-session[73304]: Received disconnect from 209.38.228.14 port 53946:11: Bye Bye [preauth]
Sep 30 14:11:54 compute-0 sshd-session[73304]: Disconnected from invalid user zhouhao 209.38.228.14 port 53946 [preauth]
Sep 30 14:11:59 compute-0 podman[73240]: 2025-09-30 14:11:59.945145954 +0000 UTC m=+21.208524033 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:00 compute-0 podman[73307]: 2025-09-30 14:11:59.993106882 +0000 UTC m=+0.022439435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:00 compute-0 podman[73307]: 2025-09-30 14:12:00.091363268 +0000 UTC m=+0.120695801 container create 052137ff6537320065677d4333a526f97f1d714e1a27411a30bf4397fbbe54c3 (image=quay.io/ceph/ceph:v19, name=youthful_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:12:00 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Sep 30 14:12:00 compute-0 systemd[1]: Started libpod-conmon-052137ff6537320065677d4333a526f97f1d714e1a27411a30bf4397fbbe54c3.scope.
Sep 30 14:12:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:00 compute-0 podman[73307]: 2025-09-30 14:12:00.280562369 +0000 UTC m=+0.309894922 container init 052137ff6537320065677d4333a526f97f1d714e1a27411a30bf4397fbbe54c3 (image=quay.io/ceph/ceph:v19, name=youthful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:12:00 compute-0 podman[73307]: 2025-09-30 14:12:00.294275516 +0000 UTC m=+0.323608059 container start 052137ff6537320065677d4333a526f97f1d714e1a27411a30bf4397fbbe54c3 (image=quay.io/ceph/ceph:v19, name=youthful_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:12:00 compute-0 podman[73307]: 2025-09-30 14:12:00.341425252 +0000 UTC m=+0.370757805 container attach 052137ff6537320065677d4333a526f97f1d714e1a27411a30bf4397fbbe54c3 (image=quay.io/ceph/ceph:v19, name=youthful_elion, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:12:00 compute-0 youthful_elion[73323]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Sep 30 14:12:00 compute-0 systemd[1]: libpod-052137ff6537320065677d4333a526f97f1d714e1a27411a30bf4397fbbe54c3.scope: Deactivated successfully.
Sep 30 14:12:00 compute-0 podman[73307]: 2025-09-30 14:12:00.391748511 +0000 UTC m=+0.421081044 container died 052137ff6537320065677d4333a526f97f1d714e1a27411a30bf4397fbbe54c3 (image=quay.io/ceph/ceph:v19, name=youthful_elion, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cca60130dd2596ca2e2d7c32bc9f4c945b2c8787a08f95e6d0a98995fc092cc0-merged.mount: Deactivated successfully.
Sep 30 14:12:00 compute-0 podman[73307]: 2025-09-30 14:12:00.681609531 +0000 UTC m=+0.710942064 container remove 052137ff6537320065677d4333a526f97f1d714e1a27411a30bf4397fbbe54c3 (image=quay.io/ceph/ceph:v19, name=youthful_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:12:00 compute-0 systemd[1]: libpod-conmon-052137ff6537320065677d4333a526f97f1d714e1a27411a30bf4397fbbe54c3.scope: Deactivated successfully.
Sep 30 14:12:00 compute-0 podman[73340]: 2025-09-30 14:12:00.832721012 +0000 UTC m=+0.125313341 container create 94d00e911230c028f0797bf2d3190a321d4b238f94b9d590303715f344bcb728 (image=quay.io/ceph/ceph:v19, name=condescending_heyrovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:12:00 compute-0 podman[73340]: 2025-09-30 14:12:00.744973799 +0000 UTC m=+0.037566148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:00 compute-0 systemd[1]: Started libpod-conmon-94d00e911230c028f0797bf2d3190a321d4b238f94b9d590303715f344bcb728.scope.
Sep 30 14:12:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:00 compute-0 podman[73340]: 2025-09-30 14:12:00.97605555 +0000 UTC m=+0.268647919 container init 94d00e911230c028f0797bf2d3190a321d4b238f94b9d590303715f344bcb728 (image=quay.io/ceph/ceph:v19, name=condescending_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 14:12:00 compute-0 podman[73340]: 2025-09-30 14:12:00.98412652 +0000 UTC m=+0.276718839 container start 94d00e911230c028f0797bf2d3190a321d4b238f94b9d590303715f344bcb728 (image=quay.io/ceph/ceph:v19, name=condescending_heyrovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:00 compute-0 condescending_heyrovsky[73356]: 167 167
Sep 30 14:12:00 compute-0 systemd[1]: libpod-94d00e911230c028f0797bf2d3190a321d4b238f94b9d590303715f344bcb728.scope: Deactivated successfully.
Sep 30 14:12:01 compute-0 podman[73340]: 2025-09-30 14:12:01.027641812 +0000 UTC m=+0.320234141 container attach 94d00e911230c028f0797bf2d3190a321d4b238f94b9d590303715f344bcb728 (image=quay.io/ceph/ceph:v19, name=condescending_heyrovsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:12:01 compute-0 podman[73340]: 2025-09-30 14:12:01.028146285 +0000 UTC m=+0.320738614 container died 94d00e911230c028f0797bf2d3190a321d4b238f94b9d590303715f344bcb728 (image=quay.io/ceph/ceph:v19, name=condescending_heyrovsky, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd89fde842f6e2ebcb403423f63b91c0da746296939523b857c27668a8b57908-merged.mount: Deactivated successfully.
Sep 30 14:12:01 compute-0 podman[73340]: 2025-09-30 14:12:01.158028554 +0000 UTC m=+0.450620893 container remove 94d00e911230c028f0797bf2d3190a321d4b238f94b9d590303715f344bcb728 (image=quay.io/ceph/ceph:v19, name=condescending_heyrovsky, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:01 compute-0 systemd[1]: libpod-conmon-94d00e911230c028f0797bf2d3190a321d4b238f94b9d590303715f344bcb728.scope: Deactivated successfully.
Sep 30 14:12:01 compute-0 podman[73374]: 2025-09-30 14:12:01.220865918 +0000 UTC m=+0.044512069 container create 3767c41300d9e2af8995348f0b3e0c443dad798dabdd7579086ed82e2aabef32 (image=quay.io/ceph/ceph:v19, name=beautiful_ardinghelli, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:12:01 compute-0 systemd[1]: Started libpod-conmon-3767c41300d9e2af8995348f0b3e0c443dad798dabdd7579086ed82e2aabef32.scope.
Sep 30 14:12:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:01 compute-0 podman[73374]: 2025-09-30 14:12:01.19865653 +0000 UTC m=+0.022302701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:01 compute-0 podman[73374]: 2025-09-30 14:12:01.302371598 +0000 UTC m=+0.126017769 container init 3767c41300d9e2af8995348f0b3e0c443dad798dabdd7579086ed82e2aabef32 (image=quay.io/ceph/ceph:v19, name=beautiful_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 14:12:01 compute-0 podman[73374]: 2025-09-30 14:12:01.308410375 +0000 UTC m=+0.132056526 container start 3767c41300d9e2af8995348f0b3e0c443dad798dabdd7579086ed82e2aabef32 (image=quay.io/ceph/ceph:v19, name=beautiful_ardinghelli, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:01 compute-0 podman[73374]: 2025-09-30 14:12:01.314514124 +0000 UTC m=+0.138160555 container attach 3767c41300d9e2af8995348f0b3e0c443dad798dabdd7579086ed82e2aabef32 (image=quay.io/ceph/ceph:v19, name=beautiful_ardinghelli, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:12:01 compute-0 beautiful_ardinghelli[73390]: AQAx5dto7s2rExAAkQUpgRNigSxZcDp9k1OXZA==
Sep 30 14:12:01 compute-0 systemd[1]: libpod-3767c41300d9e2af8995348f0b3e0c443dad798dabdd7579086ed82e2aabef32.scope: Deactivated successfully.
Sep 30 14:12:01 compute-0 podman[73374]: 2025-09-30 14:12:01.333723104 +0000 UTC m=+0.157369275 container died 3767c41300d9e2af8995348f0b3e0c443dad798dabdd7579086ed82e2aabef32 (image=quay.io/ceph/ceph:v19, name=beautiful_ardinghelli, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:12:01 compute-0 podman[73374]: 2025-09-30 14:12:01.380257244 +0000 UTC m=+0.203903395 container remove 3767c41300d9e2af8995348f0b3e0c443dad798dabdd7579086ed82e2aabef32 (image=quay.io/ceph/ceph:v19, name=beautiful_ardinghelli, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 14:12:01 compute-0 systemd[1]: libpod-conmon-3767c41300d9e2af8995348f0b3e0c443dad798dabdd7579086ed82e2aabef32.scope: Deactivated successfully.
Sep 30 14:12:01 compute-0 podman[73408]: 2025-09-30 14:12:01.500701117 +0000 UTC m=+0.102807665 container create d33f102ad77cd67f415d965658070438890e2f9fa2752608c3d92d373ca5dc3b (image=quay.io/ceph/ceph:v19, name=stupefied_cannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:01 compute-0 podman[73408]: 2025-09-30 14:12:01.416770824 +0000 UTC m=+0.018877392 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:01 compute-0 systemd[1]: Started libpod-conmon-d33f102ad77cd67f415d965658070438890e2f9fa2752608c3d92d373ca5dc3b.scope.
Sep 30 14:12:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:01 compute-0 podman[73408]: 2025-09-30 14:12:01.574706252 +0000 UTC m=+0.176812820 container init d33f102ad77cd67f415d965658070438890e2f9fa2752608c3d92d373ca5dc3b (image=quay.io/ceph/ceph:v19, name=stupefied_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:01 compute-0 podman[73408]: 2025-09-30 14:12:01.580934394 +0000 UTC m=+0.183040952 container start d33f102ad77cd67f415d965658070438890e2f9fa2752608c3d92d373ca5dc3b (image=quay.io/ceph/ceph:v19, name=stupefied_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:01 compute-0 podman[73408]: 2025-09-30 14:12:01.597477435 +0000 UTC m=+0.199583983 container attach d33f102ad77cd67f415d965658070438890e2f9fa2752608c3d92d373ca5dc3b (image=quay.io/ceph/ceph:v19, name=stupefied_cannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:12:01 compute-0 stupefied_cannon[73426]: AQAx5dtotwokJBAAdpeKbUrGlFzGTRWkzGvH0Q==
Sep 30 14:12:01 compute-0 systemd[1]: libpod-d33f102ad77cd67f415d965658070438890e2f9fa2752608c3d92d373ca5dc3b.scope: Deactivated successfully.
Sep 30 14:12:01 compute-0 podman[73408]: 2025-09-30 14:12:01.612413753 +0000 UTC m=+0.214520301 container died d33f102ad77cd67f415d965658070438890e2f9fa2752608c3d92d373ca5dc3b (image=quay.io/ceph/ceph:v19, name=stupefied_cannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-541e4e7258b4a601c954b7dc857ea1fff12eb2c9eeb6037a4eb6edbc688a7b0d-merged.mount: Deactivated successfully.
Sep 30 14:12:02 compute-0 podman[73408]: 2025-09-30 14:12:02.613836923 +0000 UTC m=+1.215943471 container remove d33f102ad77cd67f415d965658070438890e2f9fa2752608c3d92d373ca5dc3b (image=quay.io/ceph/ceph:v19, name=stupefied_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 14:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:02 compute-0 sshd-session[73425]: Invalid user edubook from 210.90.155.80 port 41338
Sep 30 14:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:02 compute-0 systemd[1]: libpod-conmon-d33f102ad77cd67f415d965658070438890e2f9fa2752608c3d92d373ca5dc3b.scope: Deactivated successfully.
Sep 30 14:12:02 compute-0 podman[73447]: 2025-09-30 14:12:02.751775851 +0000 UTC m=+0.115622999 container create 2227ca87b6ec1d61117dd3c0935d71a1430e9216e324d7e96956d468bb71e9da (image=quay.io/ceph/ceph:v19, name=ecstatic_ishizaka, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:02 compute-0 podman[73447]: 2025-09-30 14:12:02.660919917 +0000 UTC m=+0.024767095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:02 compute-0 sshd-session[73425]: Received disconnect from 210.90.155.80 port 41338:11: Bye Bye [preauth]
Sep 30 14:12:02 compute-0 sshd-session[73425]: Disconnected from invalid user edubook 210.90.155.80 port 41338 [preauth]
Sep 30 14:12:03 compute-0 systemd[1]: Started libpod-conmon-2227ca87b6ec1d61117dd3c0935d71a1430e9216e324d7e96956d468bb71e9da.scope.
Sep 30 14:12:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:03 compute-0 podman[73447]: 2025-09-30 14:12:03.281251592 +0000 UTC m=+0.645098760 container init 2227ca87b6ec1d61117dd3c0935d71a1430e9216e324d7e96956d468bb71e9da (image=quay.io/ceph/ceph:v19, name=ecstatic_ishizaka, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:12:03 compute-0 podman[73447]: 2025-09-30 14:12:03.287895485 +0000 UTC m=+0.651742633 container start 2227ca87b6ec1d61117dd3c0935d71a1430e9216e324d7e96956d468bb71e9da (image=quay.io/ceph/ceph:v19, name=ecstatic_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:12:03 compute-0 podman[73447]: 2025-09-30 14:12:03.299545258 +0000 UTC m=+0.663392426 container attach 2227ca87b6ec1d61117dd3c0935d71a1430e9216e324d7e96956d468bb71e9da (image=quay.io/ceph/ceph:v19, name=ecstatic_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:03 compute-0 ecstatic_ishizaka[73465]: AQAz5dtoE7KJEhAAgnoAoScuL1wQZtzTryd1vA==
Sep 30 14:12:03 compute-0 systemd[1]: libpod-2227ca87b6ec1d61117dd3c0935d71a1430e9216e324d7e96956d468bb71e9da.scope: Deactivated successfully.
Sep 30 14:12:03 compute-0 podman[73447]: 2025-09-30 14:12:03.315612776 +0000 UTC m=+0.679459924 container died 2227ca87b6ec1d61117dd3c0935d71a1430e9216e324d7e96956d468bb71e9da (image=quay.io/ceph/ceph:v19, name=ecstatic_ishizaka, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b64b0782ff585cfd34e08b7822b2c34425aca327624e4991e2e267c06c91f13e-merged.mount: Deactivated successfully.
Sep 30 14:12:03 compute-0 podman[73447]: 2025-09-30 14:12:03.377930867 +0000 UTC m=+0.741778015 container remove 2227ca87b6ec1d61117dd3c0935d71a1430e9216e324d7e96956d468bb71e9da (image=quay.io/ceph/ceph:v19, name=ecstatic_ishizaka, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:03 compute-0 systemd[1]: libpod-conmon-2227ca87b6ec1d61117dd3c0935d71a1430e9216e324d7e96956d468bb71e9da.scope: Deactivated successfully.
Sep 30 14:12:03 compute-0 podman[73485]: 2025-09-30 14:12:03.437351683 +0000 UTC m=+0.036652194 container create 650084ae3b11ed2422896d5824f8941b55ad8bfc508bf25517136e41e80f5981 (image=quay.io/ceph/ceph:v19, name=eloquent_sutherland, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 14:12:03 compute-0 systemd[1]: Started libpod-conmon-650084ae3b11ed2422896d5824f8941b55ad8bfc508bf25517136e41e80f5981.scope.
Sep 30 14:12:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd016eb552828dd7705798165b7d03d6d88b827a48661983d15c84f6585e8e97/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:03 compute-0 podman[73485]: 2025-09-30 14:12:03.492489617 +0000 UTC m=+0.091790128 container init 650084ae3b11ed2422896d5824f8941b55ad8bfc508bf25517136e41e80f5981 (image=quay.io/ceph/ceph:v19, name=eloquent_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:12:03 compute-0 podman[73485]: 2025-09-30 14:12:03.498784001 +0000 UTC m=+0.098084512 container start 650084ae3b11ed2422896d5824f8941b55ad8bfc508bf25517136e41e80f5981 (image=quay.io/ceph/ceph:v19, name=eloquent_sutherland, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:12:03 compute-0 podman[73485]: 2025-09-30 14:12:03.502539119 +0000 UTC m=+0.101839820 container attach 650084ae3b11ed2422896d5824f8941b55ad8bfc508bf25517136e41e80f5981 (image=quay.io/ceph/ceph:v19, name=eloquent_sutherland, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:03 compute-0 podman[73485]: 2025-09-30 14:12:03.419836247 +0000 UTC m=+0.019136778 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:03 compute-0 eloquent_sutherland[73501]: /usr/bin/monmaptool: monmap file /tmp/monmap
Sep 30 14:12:03 compute-0 eloquent_sutherland[73501]: setting min_mon_release = quincy
Sep 30 14:12:03 compute-0 eloquent_sutherland[73501]: /usr/bin/monmaptool: set fsid to 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:03 compute-0 eloquent_sutherland[73501]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Sep 30 14:12:03 compute-0 systemd[1]: libpod-650084ae3b11ed2422896d5824f8941b55ad8bfc508bf25517136e41e80f5981.scope: Deactivated successfully.
Sep 30 14:12:03 compute-0 podman[73485]: 2025-09-30 14:12:03.532131168 +0000 UTC m=+0.131431679 container died 650084ae3b11ed2422896d5824f8941b55ad8bfc508bf25517136e41e80f5981 (image=quay.io/ceph/ceph:v19, name=eloquent_sutherland, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:03 compute-0 podman[73485]: 2025-09-30 14:12:03.572636632 +0000 UTC m=+0.171937143 container remove 650084ae3b11ed2422896d5824f8941b55ad8bfc508bf25517136e41e80f5981 (image=quay.io/ceph/ceph:v19, name=eloquent_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:12:03 compute-0 systemd[1]: libpod-conmon-650084ae3b11ed2422896d5824f8941b55ad8bfc508bf25517136e41e80f5981.scope: Deactivated successfully.
Sep 30 14:12:03 compute-0 podman[73518]: 2025-09-30 14:12:03.633587757 +0000 UTC m=+0.041803448 container create c18430669f82c239c6b8b42d3a12deba135a3714b476ba2107ed2876daf98aa6 (image=quay.io/ceph/ceph:v19, name=elated_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:03 compute-0 systemd[1]: Started libpod-conmon-c18430669f82c239c6b8b42d3a12deba135a3714b476ba2107ed2876daf98aa6.scope.
Sep 30 14:12:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2ce35cae23a2633baf9aaf07f165aa9c1971460cfc617aafb6e0fb4bf8e45/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2ce35cae23a2633baf9aaf07f165aa9c1971460cfc617aafb6e0fb4bf8e45/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2ce35cae23a2633baf9aaf07f165aa9c1971460cfc617aafb6e0fb4bf8e45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2ce35cae23a2633baf9aaf07f165aa9c1971460cfc617aafb6e0fb4bf8e45/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:03 compute-0 podman[73518]: 2025-09-30 14:12:03.701126764 +0000 UTC m=+0.109342485 container init c18430669f82c239c6b8b42d3a12deba135a3714b476ba2107ed2876daf98aa6 (image=quay.io/ceph/ceph:v19, name=elated_morse, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:12:03 compute-0 podman[73518]: 2025-09-30 14:12:03.707796048 +0000 UTC m=+0.116011739 container start c18430669f82c239c6b8b42d3a12deba135a3714b476ba2107ed2876daf98aa6 (image=quay.io/ceph/ceph:v19, name=elated_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 14:12:03 compute-0 podman[73518]: 2025-09-30 14:12:03.612788536 +0000 UTC m=+0.021004247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:03 compute-0 podman[73518]: 2025-09-30 14:12:03.712341936 +0000 UTC m=+0.120557647 container attach c18430669f82c239c6b8b42d3a12deba135a3714b476ba2107ed2876daf98aa6 (image=quay.io/ceph/ceph:v19, name=elated_morse, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:03 compute-0 systemd[1]: libpod-c18430669f82c239c6b8b42d3a12deba135a3714b476ba2107ed2876daf98aa6.scope: Deactivated successfully.
Sep 30 14:12:03 compute-0 podman[73561]: 2025-09-30 14:12:03.83437019 +0000 UTC m=+0.025606767 container died c18430669f82c239c6b8b42d3a12deba135a3714b476ba2107ed2876daf98aa6 (image=quay.io/ceph/ceph:v19, name=elated_morse, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 14:12:03 compute-0 podman[73561]: 2025-09-30 14:12:03.867587954 +0000 UTC m=+0.058824501 container remove c18430669f82c239c6b8b42d3a12deba135a3714b476ba2107ed2876daf98aa6 (image=quay.io/ceph/ceph:v19, name=elated_morse, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:03 compute-0 systemd[1]: libpod-conmon-c18430669f82c239c6b8b42d3a12deba135a3714b476ba2107ed2876daf98aa6.scope: Deactivated successfully.
Sep 30 14:12:03 compute-0 systemd[1]: Reloading.
Sep 30 14:12:03 compute-0 systemd-rc-local-generator[73601]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:12:03 compute-0 systemd-sysv-generator[73605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:12:04 compute-0 systemd[1]: Reloading.
Sep 30 14:12:04 compute-0 systemd-rc-local-generator[73642]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:12:04 compute-0 systemd-sysv-generator[73646]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:12:04 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Sep 30 14:12:04 compute-0 systemd[1]: Reloading.
Sep 30 14:12:04 compute-0 systemd-sysv-generator[73684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:12:04 compute-0 systemd-rc-local-generator[73681]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:12:04 compute-0 systemd[1]: Reached target Ceph cluster 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:12:04 compute-0 systemd[1]: Reloading.
Sep 30 14:12:04 compute-0 systemd-sysv-generator[73723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:12:04 compute-0 systemd-rc-local-generator[73719]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:12:05 compute-0 systemd[1]: Reloading.
Sep 30 14:12:05 compute-0 systemd-rc-local-generator[73759]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:12:05 compute-0 systemd-sysv-generator[73763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:12:05 compute-0 systemd[1]: Created slice Slice /system/ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:12:05 compute-0 systemd[1]: Reached target System Time Set.
Sep 30 14:12:05 compute-0 systemd[1]: Reached target System Time Synchronized.
Sep 30 14:12:05 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:05 compute-0 podman[73819]: 2025-09-30 14:12:05.57352875 +0000 UTC m=+0.112339574 container create 645a8d23d8ddaf9c7f37ee398cf316283a4203fae1bfcc190c169e9932a486cd (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:05 compute-0 podman[73819]: 2025-09-30 14:12:05.490800977 +0000 UTC m=+0.029611821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e646e6fda02fa308fd5ee324012041bc82007c12b73e647efcca185336b68644/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e646e6fda02fa308fd5ee324012041bc82007c12b73e647efcca185336b68644/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e646e6fda02fa308fd5ee324012041bc82007c12b73e647efcca185336b68644/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e646e6fda02fa308fd5ee324012041bc82007c12b73e647efcca185336b68644/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:06 compute-0 podman[73819]: 2025-09-30 14:12:06.153731672 +0000 UTC m=+0.692542516 container init 645a8d23d8ddaf9c7f37ee398cf316283a4203fae1bfcc190c169e9932a486cd (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:12:06 compute-0 podman[73819]: 2025-09-30 14:12:06.160474367 +0000 UTC m=+0.699285201 container start 645a8d23d8ddaf9c7f37ee398cf316283a4203fae1bfcc190c169e9932a486cd (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:06 compute-0 ceph-mon[73839]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 14:12:06 compute-0 ceph-mon[73839]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Sep 30 14:12:06 compute-0 ceph-mon[73839]: pidfile_write: ignore empty --pid-file
Sep 30 14:12:06 compute-0 ceph-mon[73839]: load: jerasure load: lrc 
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: RocksDB version: 7.9.2
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Git sha 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Compile date 2025-07-17 03:12:14
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: DB SUMMARY
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: DB Session ID:  ZXH27Q9NHLMCBB14RNIS
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: CURRENT file:  CURRENT
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: IDENTITY file:  IDENTITY
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                         Options.error_if_exists: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                       Options.create_if_missing: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                         Options.paranoid_checks: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.flush_verify_memtable_count: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                                     Options.env: 0x5631a6183c20
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                                      Options.fs: PosixFileSystem
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                                Options.info_log: 0x5631a7540d60
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                Options.max_file_opening_threads: 16
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                              Options.statistics: (nil)
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                               Options.use_fsync: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                       Options.max_log_file_size: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                   Options.log_file_time_to_roll: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                       Options.keep_log_file_num: 1000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                    Options.recycle_log_file_num: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                         Options.allow_fallocate: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                        Options.allow_mmap_reads: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                       Options.allow_mmap_writes: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                        Options.use_direct_reads: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:          Options.create_missing_column_families: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                              Options.db_log_dir: 
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                                 Options.wal_dir: 
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                Options.table_cache_numshardbits: 6
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                         Options.WAL_ttl_seconds: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                       Options.WAL_size_limit_MB: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.manifest_preallocation_size: 4194304
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                     Options.is_fd_close_on_exec: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                   Options.advise_random_on_open: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                    Options.db_write_buffer_size: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                    Options.write_buffer_manager: 0x5631a7545900
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.access_hint_on_compaction_start: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                      Options.use_adaptive_mutex: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                            Options.rate_limiter: (nil)
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                       Options.wal_recovery_mode: 2
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                  Options.enable_thread_tracking: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                  Options.enable_pipelined_write: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                  Options.unordered_write: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.write_thread_max_yield_usec: 100
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                               Options.row_cache: None
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                              Options.wal_filter: None
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.avoid_flush_during_recovery: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.allow_ingest_behind: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.two_write_queues: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.manual_wal_flush: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.wal_compression: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.atomic_flush: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                 Options.persist_stats_to_disk: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                 Options.write_dbid_to_manifest: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                 Options.log_readahead_size: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                 Options.best_efforts_recovery: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.allow_data_in_errors: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.db_host_id: __hostname__
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.enforce_single_del_contracts: true
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.max_background_jobs: 2
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.max_background_compactions: -1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.max_subcompactions: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.delayed_write_rate : 16777216
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.max_total_wal_size: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                   Options.stats_dump_period_sec: 600
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                 Options.stats_persist_period_sec: 600
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                          Options.max_open_files: -1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                          Options.bytes_per_sync: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                      Options.wal_bytes_per_sync: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                   Options.strict_bytes_per_sync: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:       Options.compaction_readahead_size: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                  Options.max_background_flushes: -1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Compression algorithms supported:
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         kZSTD supported: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         kXpressCompression supported: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         kBZip2Compression supported: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         kZSTDNotFinalCompression supported: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         kLZ4Compression supported: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         kZlibCompression supported: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         kLZ4HCCompression supported: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         kSnappyCompression supported: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Fast CRC32 supported: Supported on x86
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: DMutex implementation: pthread_mutex_t
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:           Options.merge_operator: 
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:        Options.compaction_filter: None
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5631a7540500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5631a7565350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:        Options.write_buffer_size: 33554432
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:  Options.max_write_buffer_number: 2
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:          Options.compression: NoCompression
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.num_levels: 7
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4a74fe2f-a33e-416b-ba25-743e7942b3ac
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241526209721, "job": 1, "event": "recovery_started", "wal_files": [4]}
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Sep 30 14:12:06 compute-0 bash[73819]: 645a8d23d8ddaf9c7f37ee398cf316283a4203fae1bfcc190c169e9932a486cd
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241526406739, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "ZXH27Q9NHLMCBB14RNIS", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241526407002, "job": 1, "event": "recovery_finished"}
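The EVENT_LOG_v1 entries above are structured RocksDB events: everything after the marker is one JSON object (here recovery_started, table_file_creation, and recovery_finished for job 1). A minimal Python sketch for pulling those payloads out of a plain-text export of this journal; the input file name is hypothetical.

import json
import re

# Minimal sketch: extract the JSON payload from RocksDB "EVENT_LOG_v1" lines.
# "mon-journal.txt" is a hypothetical plain-text export of the journal above.
event_re = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

with open("mon-journal.txt") as fh:
    for line in fh:
        m = event_re.search(line)
        if m:
            event = json.loads(m.group(1))
            # e.g. recovery_started / table_file_creation / recovery_finished
            print(event["job"], event["event"])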
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Sep 30 14:12:06 compute-0 systemd[1]: Started Ceph mon.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5631a7566e00
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: DB pointer 0x5631a7670000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:12:06 compute-0 ceph-mon[73839]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.20              0.00         1    0.197       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.20              0.00         1    0.197       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.20              0.00         1    0.197       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.197       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5631a7565350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
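The dump above ends with the per-level compaction stats and a block cache summary. If the block cache entry line needs to be inspected programmatically, a small sketch (input string copied from this log) splits it into (count, size, portion) tuples per block type:

import re

# Sketch: split the "Block cache entry stats" line from the dump above into
# (count, size, portion) tuples keyed by block type.
line = ("Block cache entry stats(count,size,portion): "
        "FilterBlock(1,0.11 KB,2.08616e-05%) "
        "IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)")
entries = line.split(": ", 1)[1]
stats = {name: tuple(fields.split(","))
         for name, fields in re.findall(r"(\w+)\(([^)]+)\)", entries)}
print(stats["IndexBlock"])   # ('1', '0.11 KB', '2.08616e-05%')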
Sep 30 14:12:06 compute-0 ceph-mon[73839]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@-1(???) e0 preinit fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(probing) e0 win_standalone_election
Sep 30 14:12:06 compute-0 ceph-mon[73839]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:12:06 compute-0 ceph-mon[73839]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 14:12:06 compute-0 podman[73861]: 2025-09-30 14:12:06.72305913 +0000 UTC m=+0.034180140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Sep 30 14:12:06 compute-0 podman[73861]: 2025-09-30 14:12:06.838726269 +0000 UTC m=+0.149847259 container create 675beaf08be8724397405704f26c519c1e0993216a7195086336afc5ec220b44 (image=quay.io/ceph/ceph:v19, name=hungry_kepler, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(probing) e1 win_standalone_election
Sep 30 14:12:06 compute-0 ceph-mon[73839]: paxos.0).electionLogic(2) init, last seen epoch 2
Sep 30 14:12:06 compute-0 systemd[1]: Started libpod-conmon-675beaf08be8724397405704f26c519c1e0993216a7195086336afc5ec220b44.scope.
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:12:06 compute-0 ceph-mon[73839]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 14:12:06 compute-0 ceph-mon[73839]: log_channel(cluster) log [DBG] : monmap epoch 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: log_channel(cluster) log [DBG] : fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:06 compute-0 ceph-mon[73839]: log_channel(cluster) log [DBG] : last_changed 2025-09-30T14:12:03.527961+0000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: log_channel(cluster) log [DBG] : created 2025-09-30T14:12:03.527961+0000
Sep 30 14:12:06 compute-0 ceph-mon[73839]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Sep 30 14:12:06 compute-0 ceph-mon[73839]: log_channel(cluster) log [DBG] : election_strategy: 1
Sep 30 14:12:06 compute-0 ceph-mon[73839]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025,kernel_version=5.14.0-617.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864116,os=Linux}
Sep 30 14:12:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ed864a4416de42f6e679db078b5958cf68bd31b1e1f4c9ee1736c720b20692/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ed864a4416de42f6e679db078b5958cf68bd31b1e1f4c9ee1736c720b20692/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ed864a4416de42f6e679db078b5958cf68bd31b1e1f4c9ee1736c720b20692/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:12:06 compute-0 ceph-mon[73839]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).mds e1 new map
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-09-30T14:12:06.949277+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Sep 30 14:12:07 compute-0 ceph-mon[73839]: log_channel(cluster) log [DBG] : fsmap 
Sep 30 14:12:07 compute-0 podman[73861]: 2025-09-30 14:12:07.152273905 +0000 UTC m=+0.463394895 container init 675beaf08be8724397405704f26c519c1e0993216a7195086336afc5ec220b44 (image=quay.io/ceph/ceph:v19, name=hungry_kepler, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:12:07 compute-0 podman[73861]: 2025-09-30 14:12:07.161817983 +0000 UTC m=+0.472938973 container start 675beaf08be8724397405704f26c519c1e0993216a7195086336afc5ec220b44 (image=quay.io/ceph/ceph:v19, name=hungry_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mkfs 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:07 compute-0 podman[73861]: 2025-09-30 14:12:07.170477788 +0000 UTC m=+0.481598778 container attach 675beaf08be8724397405704f26c519c1e0993216a7195086336afc5ec220b44 (image=quay.io/ceph/ceph:v19, name=hungry_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Sep 30 14:12:07 compute-0 ceph-mon[73839]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Sep 30 14:12:07 compute-0 ceph-mon[73839]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Sep 30 14:12:07 compute-0 ceph-mon[73839]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943246517' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:   cluster:
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:     id:     5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:     health: HEALTH_OK
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:  
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:   services:
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:     mon: 1 daemons, quorum compute-0 (age 0.421564s)
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:     mgr: no daemons active
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:     osd: 0 osds: 0 up, 0 in
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:  
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:   data:
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:     pools:   0 pools, 0 pgs
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:     objects: 0 objects, 0 B
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:     usage:   0 B used, 0 B / 0 B avail
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:     pgs:     
Sep 30 14:12:07 compute-0 hungry_kepler[73895]:  
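The hungry_kepler container above is evidently a one-shot `ceph status` invocation (the mon_command dispatch is logged a few lines earlier) and prints the human-readable summary. The same fields can usually be read machine-readably; a hedged sketch, assuming the admin keyring and conf are in place on the host, with key names based on typical `ceph status --format json` output rather than on this log:

import json
import subprocess

# Hedged sketch: fetch the status shown above in JSON form.
# Key names below are assumptions about typical output, not taken from this log.
raw = subprocess.check_output(["ceph", "status", "--format", "json"])
status = json.loads(raw)
print(status.get("fsid"))                        # cluster id
print(status.get("health", {}).get("status"))    # e.g. HEALTH_OK
print(status.get("quorum_names"))                # e.g. ["compute-0"]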
Sep 30 14:12:07 compute-0 systemd[1]: libpod-675beaf08be8724397405704f26c519c1e0993216a7195086336afc5ec220b44.scope: Deactivated successfully.
Sep 30 14:12:07 compute-0 podman[73861]: 2025-09-30 14:12:07.39273257 +0000 UTC m=+0.703853560 container died 675beaf08be8724397405704f26c519c1e0993216a7195086336afc5ec220b44 (image=quay.io/ceph/ceph:v19, name=hungry_kepler, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2ed864a4416de42f6e679db078b5958cf68bd31b1e1f4c9ee1736c720b20692-merged.mount: Deactivated successfully.
Sep 30 14:12:07 compute-0 podman[73861]: 2025-09-30 14:12:07.443019428 +0000 UTC m=+0.754140418 container remove 675beaf08be8724397405704f26c519c1e0993216a7195086336afc5ec220b44 (image=quay.io/ceph/ceph:v19, name=hungry_kepler, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:12:07 compute-0 systemd[1]: libpod-conmon-675beaf08be8724397405704f26c519c1e0993216a7195086336afc5ec220b44.scope: Deactivated successfully.
Sep 30 14:12:07 compute-0 podman[73933]: 2025-09-30 14:12:07.525487623 +0000 UTC m=+0.054325524 container create 56a71afec6d97ea239ec45f91922cdc033f8e913dd2e6b9678532c0ddadcf595 (image=quay.io/ceph/ceph:v19, name=optimistic_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 14:12:07 compute-0 systemd[1]: Started libpod-conmon-56a71afec6d97ea239ec45f91922cdc033f8e913dd2e6b9678532c0ddadcf595.scope.
Sep 30 14:12:07 compute-0 podman[73933]: 2025-09-30 14:12:07.498774298 +0000 UTC m=+0.027612229 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501bf2d09566cd99c79e5e2928004f9adede02004e54646f95878a9612f08813/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501bf2d09566cd99c79e5e2928004f9adede02004e54646f95878a9612f08813/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501bf2d09566cd99c79e5e2928004f9adede02004e54646f95878a9612f08813/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501bf2d09566cd99c79e5e2928004f9adede02004e54646f95878a9612f08813/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:07 compute-0 podman[73933]: 2025-09-30 14:12:07.617516837 +0000 UTC m=+0.146354748 container init 56a71afec6d97ea239ec45f91922cdc033f8e913dd2e6b9678532c0ddadcf595 (image=quay.io/ceph/ceph:v19, name=optimistic_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:07 compute-0 podman[73933]: 2025-09-30 14:12:07.622913397 +0000 UTC m=+0.151751288 container start 56a71afec6d97ea239ec45f91922cdc033f8e913dd2e6b9678532c0ddadcf595 (image=quay.io/ceph/ceph:v19, name=optimistic_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:07 compute-0 podman[73933]: 2025-09-30 14:12:07.629834837 +0000 UTC m=+0.158672748 container attach 56a71afec6d97ea239ec45f91922cdc033f8e913dd2e6b9678532c0ddadcf595 (image=quay.io/ceph/ceph:v19, name=optimistic_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:07 compute-0 ceph-mon[73839]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Sep 30 14:12:07 compute-0 ceph-mon[73839]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3812360340' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 14:12:07 compute-0 ceph-mon[73839]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3812360340' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Sep 30 14:12:07 compute-0 optimistic_wescoff[73949]: 
Sep 30 14:12:07 compute-0 optimistic_wescoff[73949]: [global]
Sep 30 14:12:07 compute-0 optimistic_wescoff[73949]:         fsid = 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:07 compute-0 optimistic_wescoff[73949]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
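The optimistic_wescoff container ran `ceph config assimilate-conf` (dispatch and finish are logged above), and the [global] block it echoed back is plain INI text; the matching `config generate-minimal-conf` call follows shortly after. A sketch parsing that block with Python's configparser, with the values copied from this log:

import configparser

# Sketch: the minimal conf printed above is ordinary INI text.
minimal_conf = """\
[global]
fsid = 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
"""
cfg = configparser.ConfigParser()
cfg.read_string(minimal_conf)
print(cfg["global"]["fsid"])
print(cfg["global"]["mon_host"])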
Sep 30 14:12:07 compute-0 systemd[1]: libpod-56a71afec6d97ea239ec45f91922cdc033f8e913dd2e6b9678532c0ddadcf595.scope: Deactivated successfully.
Sep 30 14:12:07 compute-0 conmon[73949]: conmon 56a71afec6d97ea239ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56a71afec6d97ea239ec45f91922cdc033f8e913dd2e6b9678532c0ddadcf595.scope/container/memory.events
Sep 30 14:12:07 compute-0 podman[73933]: 2025-09-30 14:12:07.833472824 +0000 UTC m=+0.362310745 container died 56a71afec6d97ea239ec45f91922cdc033f8e913dd2e6b9678532c0ddadcf595 (image=quay.io/ceph/ceph:v19, name=optimistic_wescoff, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-501bf2d09566cd99c79e5e2928004f9adede02004e54646f95878a9612f08813-merged.mount: Deactivated successfully.
Sep 30 14:12:07 compute-0 podman[73933]: 2025-09-30 14:12:07.875729953 +0000 UTC m=+0.404567834 container remove 56a71afec6d97ea239ec45f91922cdc033f8e913dd2e6b9678532c0ddadcf595 (image=quay.io/ceph/ceph:v19, name=optimistic_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:12:07 compute-0 systemd[1]: libpod-conmon-56a71afec6d97ea239ec45f91922cdc033f8e913dd2e6b9678532c0ddadcf595.scope: Deactivated successfully.
Sep 30 14:12:07 compute-0 podman[73987]: 2025-09-30 14:12:07.946641368 +0000 UTC m=+0.044214581 container create d19484dd73b6d4e2141e13e181039badba626d696fcca61b9622d9aa88106e1c (image=quay.io/ceph/ceph:v19, name=funny_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:12:07 compute-0 systemd[1]: Started libpod-conmon-d19484dd73b6d4e2141e13e181039badba626d696fcca61b9622d9aa88106e1c.scope.
Sep 30 14:12:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcca36c18fcb9ff6e6fc7fa1ac204100d26beda24015f0df6bbadaed98d46d96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcca36c18fcb9ff6e6fc7fa1ac204100d26beda24015f0df6bbadaed98d46d96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcca36c18fcb9ff6e6fc7fa1ac204100d26beda24015f0df6bbadaed98d46d96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcca36c18fcb9ff6e6fc7fa1ac204100d26beda24015f0df6bbadaed98d46d96/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:08 compute-0 podman[73987]: 2025-09-30 14:12:07.926021792 +0000 UTC m=+0.023595025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:08 compute-0 podman[73987]: 2025-09-30 14:12:08.041700531 +0000 UTC m=+0.139273744 container init d19484dd73b6d4e2141e13e181039badba626d696fcca61b9622d9aa88106e1c (image=quay.io/ceph/ceph:v19, name=funny_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:08 compute-0 podman[73987]: 2025-09-30 14:12:08.056341382 +0000 UTC m=+0.153914595 container start d19484dd73b6d4e2141e13e181039badba626d696fcca61b9622d9aa88106e1c (image=quay.io/ceph/ceph:v19, name=funny_lederberg, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 14:12:08 compute-0 podman[73987]: 2025-09-30 14:12:08.061598678 +0000 UTC m=+0.159171921 container attach d19484dd73b6d4e2141e13e181039badba626d696fcca61b9622d9aa88106e1c (image=quay.io/ceph/ceph:v19, name=funny_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 14:12:08 compute-0 ceph-mon[73839]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 14:12:08 compute-0 ceph-mon[73839]: monmap epoch 1
Sep 30 14:12:08 compute-0 ceph-mon[73839]: fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:08 compute-0 ceph-mon[73839]: last_changed 2025-09-30T14:12:03.527961+0000
Sep 30 14:12:08 compute-0 ceph-mon[73839]: created 2025-09-30T14:12:03.527961+0000
Sep 30 14:12:08 compute-0 ceph-mon[73839]: min_mon_release 19 (squid)
Sep 30 14:12:08 compute-0 ceph-mon[73839]: election_strategy: 1
Sep 30 14:12:08 compute-0 ceph-mon[73839]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 14:12:08 compute-0 ceph-mon[73839]: fsmap 
Sep 30 14:12:08 compute-0 ceph-mon[73839]: osdmap e1: 0 total, 0 up, 0 in
Sep 30 14:12:08 compute-0 ceph-mon[73839]: mgrmap e1: no daemons active
Sep 30 14:12:08 compute-0 ceph-mon[73839]: from='client.? 192.168.122.100:0/2943246517' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 14:12:08 compute-0 ceph-mon[73839]: from='client.? 192.168.122.100:0/3812360340' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 14:12:08 compute-0 ceph-mon[73839]: from='client.? 192.168.122.100:0/3812360340' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Sep 30 14:12:08 compute-0 ceph-mon[73839]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:12:08 compute-0 ceph-mon[73839]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297132313' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:12:08 compute-0 systemd[1]: libpod-d19484dd73b6d4e2141e13e181039badba626d696fcca61b9622d9aa88106e1c.scope: Deactivated successfully.
Sep 30 14:12:08 compute-0 podman[74029]: 2025-09-30 14:12:08.347502045 +0000 UTC m=+0.028295667 container died d19484dd73b6d4e2141e13e181039badba626d696fcca61b9622d9aa88106e1c (image=quay.io/ceph/ceph:v19, name=funny_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 14:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcca36c18fcb9ff6e6fc7fa1ac204100d26beda24015f0df6bbadaed98d46d96-merged.mount: Deactivated successfully.
Sep 30 14:12:08 compute-0 podman[74029]: 2025-09-30 14:12:08.520385852 +0000 UTC m=+0.201179434 container remove d19484dd73b6d4e2141e13e181039badba626d696fcca61b9622d9aa88106e1c (image=quay.io/ceph/ceph:v19, name=funny_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:12:08 compute-0 systemd[1]: libpod-conmon-d19484dd73b6d4e2141e13e181039badba626d696fcca61b9622d9aa88106e1c.scope: Deactivated successfully.
Sep 30 14:12:08 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:12:08 compute-0 ceph-mon[73839]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Sep 30 14:12:08 compute-0 ceph-mon[73839]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Sep 30 14:12:08 compute-0 ceph-mon[73839]: mon.compute-0@0(leader) e1 shutdown
Sep 30 14:12:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0[73835]: 2025-09-30T14:12:08.975+0000 7fb692c85640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Sep 30 14:12:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0[73835]: 2025-09-30T14:12:08.975+0000 7fb692c85640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Sep 30 14:12:08 compute-0 ceph-mon[73839]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Sep 30 14:12:08 compute-0 ceph-mon[73839]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Sep 30 14:12:09 compute-0 podman[74070]: 2025-09-30 14:12:09.161045637 +0000 UTC m=+0.479832342 container died 645a8d23d8ddaf9c7f37ee398cf316283a4203fae1bfcc190c169e9932a486cd (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e646e6fda02fa308fd5ee324012041bc82007c12b73e647efcca185336b68644-merged.mount: Deactivated successfully.
Sep 30 14:12:09 compute-0 podman[74070]: 2025-09-30 14:12:09.20537546 +0000 UTC m=+0.524162165 container remove 645a8d23d8ddaf9c7f37ee398cf316283a4203fae1bfcc190c169e9932a486cd (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:09 compute-0 bash[74070]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0
Sep 30 14:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:09 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@mon.compute-0.service: Deactivated successfully.
Sep 30 14:12:09 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:12:09 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Sep 30 14:12:09 compute-0 podman[74174]: 2025-09-30 14:12:09.564227545 +0000 UTC m=+0.046011948 container create a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870b3e6109d38c2f990d94211a1967c304a713f9000811639232bf759f752c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870b3e6109d38c2f990d94211a1967c304a713f9000811639232bf759f752c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870b3e6109d38c2f990d94211a1967c304a713f9000811639232bf759f752c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870b3e6109d38c2f990d94211a1967c304a713f9000811639232bf759f752c3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:09 compute-0 podman[74174]: 2025-09-30 14:12:09.630055237 +0000 UTC m=+0.111839670 container init a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:09 compute-0 podman[74174]: 2025-09-30 14:12:09.636965467 +0000 UTC m=+0.118749880 container start a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:09 compute-0 podman[74174]: 2025-09-30 14:12:09.544887662 +0000 UTC m=+0.026672115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:09 compute-0 bash[74174]: a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5
Sep 30 14:12:09 compute-0 systemd[1]: Started Ceph mon.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:12:09 compute-0 ceph-mon[74194]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 14:12:09 compute-0 ceph-mon[74194]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Sep 30 14:12:09 compute-0 ceph-mon[74194]: pidfile_write: ignore empty --pid-file
Sep 30 14:12:09 compute-0 ceph-mon[74194]: load: jerasure load: lrc 
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: RocksDB version: 7.9.2
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Git sha 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Compile date 2025-07-17 03:12:14
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: DB SUMMARY
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: DB Session ID:  KY5CTSKWFSFJYE5835A9
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: CURRENT file:  CURRENT
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: IDENTITY file:  IDENTITY
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58743 ; 
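Per the DB SUMMARY above, the restarted mon found MANIFEST-000010, one SST file (000008.sst, written during the first start), and a 58743-byte WAL (000009.log) to recover. A sketch listing the same files on disk; run it on the monitor host with read access to the mon data directory:

from pathlib import Path

# Sketch: list the RocksDB files that the DB SUMMARY above is describing.
store = Path("/var/lib/ceph/mon/ceph-compute-0/store.db")
for entry in sorted(store.iterdir()):
    print(entry.name, entry.stat().st_size)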
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                         Options.error_if_exists: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                       Options.create_if_missing: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                         Options.paranoid_checks: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.flush_verify_memtable_count: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                                     Options.env: 0x5596d6152c20
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                                      Options.fs: PosixFileSystem
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                                Options.info_log: 0x5596d71ece20
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                Options.max_file_opening_threads: 16
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                              Options.statistics: (nil)
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                               Options.use_fsync: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                       Options.max_log_file_size: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                   Options.log_file_time_to_roll: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                       Options.keep_log_file_num: 1000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                    Options.recycle_log_file_num: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                         Options.allow_fallocate: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                        Options.allow_mmap_reads: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                       Options.allow_mmap_writes: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                        Options.use_direct_reads: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:          Options.create_missing_column_families: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                              Options.db_log_dir: 
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                                 Options.wal_dir: 
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                Options.table_cache_numshardbits: 6
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                         Options.WAL_ttl_seconds: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                       Options.WAL_size_limit_MB: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.manifest_preallocation_size: 4194304
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                     Options.is_fd_close_on_exec: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                   Options.advise_random_on_open: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                    Options.db_write_buffer_size: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                    Options.write_buffer_manager: 0x5596d71f1900
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.access_hint_on_compaction_start: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                      Options.use_adaptive_mutex: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                            Options.rate_limiter: (nil)
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                       Options.wal_recovery_mode: 2
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                  Options.enable_thread_tracking: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                  Options.enable_pipelined_write: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                  Options.unordered_write: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.write_thread_max_yield_usec: 100
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                               Options.row_cache: None
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                              Options.wal_filter: None
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.avoid_flush_during_recovery: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.allow_ingest_behind: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.two_write_queues: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.manual_wal_flush: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.wal_compression: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.atomic_flush: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                 Options.persist_stats_to_disk: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                 Options.write_dbid_to_manifest: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                 Options.log_readahead_size: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                 Options.best_efforts_recovery: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.allow_data_in_errors: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.db_host_id: __hostname__
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.enforce_single_del_contracts: true
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.max_background_jobs: 2
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.max_background_compactions: -1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.max_subcompactions: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.delayed_write_rate : 16777216
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.max_total_wal_size: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                   Options.stats_dump_period_sec: 600
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                 Options.stats_persist_period_sec: 600
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                          Options.max_open_files: -1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                          Options.bytes_per_sync: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                      Options.wal_bytes_per_sync: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                   Options.strict_bytes_per_sync: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:       Options.compaction_readahead_size: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                  Options.max_background_flushes: -1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Compression algorithms supported:
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         kZSTD supported: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         kXpressCompression supported: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         kBZip2Compression supported: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         kZSTDNotFinalCompression supported: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         kLZ4Compression supported: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         kZlibCompression supported: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         kLZ4HCCompression supported: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         kSnappyCompression supported: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Fast CRC32 supported: Supported on x86
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: DMutex implementation: pthread_mutex_t
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:           Options.merge_operator: 
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:        Options.compaction_filter: None
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5596d71ecaa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5596d7211350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:        Options.write_buffer_size: 33554432
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:  Options.max_write_buffer_number: 2
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:          Options.compression: NoCompression
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.num_levels: 7
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4a74fe2f-a33e-416b-ba25-743e7942b3ac
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241529683014, "job": 1, "event": "recovery_started", "wal_files": [9]}
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241529687594, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56968, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54485, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241529, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241529687758, "job": 1, "event": "recovery_finished"}
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5596d7212e00
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: DB pointer 0x5596d731c000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:12:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d7211350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 14:12:09 compute-0 ceph-mon[74194]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@-1(???) e1 preinit fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@-1(???).mds e1 new map
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-09-30T14:12:06:949277+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@0(probing) e1 win_standalone_election
Sep 30 14:12:09 compute-0 ceph-mon[74194]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : monmap epoch 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : last_changed 2025-09-30T14:12:03.527961+0000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : created 2025-09-30T14:12:03.527961+0000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : election_strategy: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap 
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Sep 30 14:12:09 compute-0 podman[74195]: 2025-09-30 14:12:09.722942863 +0000 UTC m=+0.044345164 container create c3ec437a8965b5fbd4788939d061582a4acd344e5c99c3c03a1425cdfc07f9af (image=quay.io/ceph/ceph:v19, name=brave_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Sep 30 14:12:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Sep 30 14:12:09 compute-0 systemd[1]: Started libpod-conmon-c3ec437a8965b5fbd4788939d061582a4acd344e5c99c3c03a1425cdfc07f9af.scope.
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Sep 30 14:12:09 compute-0 ceph-mon[74194]: monmap epoch 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:09 compute-0 ceph-mon[74194]: last_changed 2025-09-30T14:12:03.527961+0000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: created 2025-09-30T14:12:03.527961+0000
Sep 30 14:12:09 compute-0 ceph-mon[74194]: min_mon_release 19 (squid)
Sep 30 14:12:09 compute-0 ceph-mon[74194]: election_strategy: 1
Sep 30 14:12:09 compute-0 ceph-mon[74194]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 14:12:09 compute-0 ceph-mon[74194]: fsmap 
Sep 30 14:12:09 compute-0 ceph-mon[74194]: osdmap e1: 0 total, 0 up, 0 in
Sep 30 14:12:09 compute-0 ceph-mon[74194]: mgrmap e1: no daemons active
Sep 30 14:12:09 compute-0 podman[74195]: 2025-09-30 14:12:09.704431672 +0000 UTC m=+0.025834003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbc6d3b4fd48327612f7702c810c5a3be940edf04d5848232d294c87f630188/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbc6d3b4fd48327612f7702c810c5a3be940edf04d5848232d294c87f630188/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbc6d3b4fd48327612f7702c810c5a3be940edf04d5848232d294c87f630188/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:09 compute-0 podman[74195]: 2025-09-30 14:12:09.82431049 +0000 UTC m=+0.145712791 container init c3ec437a8965b5fbd4788939d061582a4acd344e5c99c3c03a1425cdfc07f9af (image=quay.io/ceph/ceph:v19, name=brave_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:09 compute-0 podman[74195]: 2025-09-30 14:12:09.832256377 +0000 UTC m=+0.153658678 container start c3ec437a8965b5fbd4788939d061582a4acd344e5c99c3c03a1425cdfc07f9af (image=quay.io/ceph/ceph:v19, name=brave_gould, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:09 compute-0 podman[74195]: 2025-09-30 14:12:09.8358506 +0000 UTC m=+0.157252931 container attach c3ec437a8965b5fbd4788939d061582a4acd344e5c99c3c03a1425cdfc07f9af (image=quay.io/ceph/ceph:v19, name=brave_gould, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Sep 30 14:12:10 compute-0 systemd[1]: libpod-c3ec437a8965b5fbd4788939d061582a4acd344e5c99c3c03a1425cdfc07f9af.scope: Deactivated successfully.
Sep 30 14:12:10 compute-0 conmon[74249]: conmon c3ec437a8965b5fbd478 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3ec437a8965b5fbd4788939d061582a4acd344e5c99c3c03a1425cdfc07f9af.scope/container/memory.events
Sep 30 14:12:10 compute-0 podman[74195]: 2025-09-30 14:12:10.03733701 +0000 UTC m=+0.358739311 container died c3ec437a8965b5fbd4788939d061582a4acd344e5c99c3c03a1425cdfc07f9af (image=quay.io/ceph/ceph:v19, name=brave_gould, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:10 compute-0 podman[74195]: 2025-09-30 14:12:10.08270129 +0000 UTC m=+0.404103581 container remove c3ec437a8965b5fbd4788939d061582a4acd344e5c99c3c03a1425cdfc07f9af (image=quay.io/ceph/ceph:v19, name=brave_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:12:10 compute-0 systemd[1]: libpod-conmon-c3ec437a8965b5fbd4788939d061582a4acd344e5c99c3c03a1425cdfc07f9af.scope: Deactivated successfully.
Sep 30 14:12:10 compute-0 podman[74286]: 2025-09-30 14:12:10.151140271 +0000 UTC m=+0.043341299 container create a209a991d77635db881368516f320eb62f7ad69425ce8535a1632f42e316220b (image=quay.io/ceph/ceph:v19, name=musing_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:10 compute-0 podman[74286]: 2025-09-30 14:12:10.132570238 +0000 UTC m=+0.024771296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:10 compute-0 systemd[1]: Started libpod-conmon-a209a991d77635db881368516f320eb62f7ad69425ce8535a1632f42e316220b.scope.
Sep 30 14:12:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06bc52214ec22d6a8f91184b6510714550ce6cde36a92f4eb41014b20c99ecdd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06bc52214ec22d6a8f91184b6510714550ce6cde36a92f4eb41014b20c99ecdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06bc52214ec22d6a8f91184b6510714550ce6cde36a92f4eb41014b20c99ecdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:10 compute-0 podman[74286]: 2025-09-30 14:12:10.261801389 +0000 UTC m=+0.154002437 container init a209a991d77635db881368516f320eb62f7ad69425ce8535a1632f42e316220b (image=quay.io/ceph/ceph:v19, name=musing_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 14:12:10 compute-0 podman[74286]: 2025-09-30 14:12:10.268870243 +0000 UTC m=+0.161071271 container start a209a991d77635db881368516f320eb62f7ad69425ce8535a1632f42e316220b (image=quay.io/ceph/ceph:v19, name=musing_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:10 compute-0 podman[74286]: 2025-09-30 14:12:10.272004885 +0000 UTC m=+0.164205943 container attach a209a991d77635db881368516f320eb62f7ad69425ce8535a1632f42e316220b (image=quay.io/ceph/ceph:v19, name=musing_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 14:12:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Sep 30 14:12:10 compute-0 systemd[1]: libpod-a209a991d77635db881368516f320eb62f7ad69425ce8535a1632f42e316220b.scope: Deactivated successfully.
Sep 30 14:12:10 compute-0 podman[74286]: 2025-09-30 14:12:10.484594175 +0000 UTC m=+0.376795203 container died a209a991d77635db881368516f320eb62f7ad69425ce8535a1632f42e316220b (image=quay.io/ceph/ceph:v19, name=musing_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 14:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-06bc52214ec22d6a8f91184b6510714550ce6cde36a92f4eb41014b20c99ecdd-merged.mount: Deactivated successfully.
Sep 30 14:12:10 compute-0 podman[74286]: 2025-09-30 14:12:10.518452495 +0000 UTC m=+0.410653513 container remove a209a991d77635db881368516f320eb62f7ad69425ce8535a1632f42e316220b (image=quay.io/ceph/ceph:v19, name=musing_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:12:10 compute-0 systemd[1]: libpod-conmon-a209a991d77635db881368516f320eb62f7ad69425ce8535a1632f42e316220b.scope: Deactivated successfully.
Sep 30 14:12:10 compute-0 systemd[1]: Reloading.
Sep 30 14:12:10 compute-0 systemd-rc-local-generator[74366]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:12:10 compute-0 systemd-sysv-generator[74370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:12:10 compute-0 systemd[1]: Reloading.
Sep 30 14:12:10 compute-0 systemd-rc-local-generator[74408]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:12:10 compute-0 systemd-sysv-generator[74411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:12:11 compute-0 systemd[1]: Starting Ceph mgr.compute-0.buxlkm for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:12:11 compute-0 podman[74465]: 2025-09-30 14:12:11.261315559 +0000 UTC m=+0.045972627 container create a69f0208767c90727f44f4457d5f69ab07fd858d685385d13ee282093cfe6ff8 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 14:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1a2d1993219931da6c66ea4e273a7e533e9f58f4b32f0e14839cb63dd3f0a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1a2d1993219931da6c66ea4e273a7e533e9f58f4b32f0e14839cb63dd3f0a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1a2d1993219931da6c66ea4e273a7e533e9f58f4b32f0e14839cb63dd3f0a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1a2d1993219931da6c66ea4e273a7e533e9f58f4b32f0e14839cb63dd3f0a6/merged/var/lib/ceph/mgr/ceph-compute-0.buxlkm supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:11 compute-0 podman[74465]: 2025-09-30 14:12:11.317051758 +0000 UTC m=+0.101708846 container init a69f0208767c90727f44f4457d5f69ab07fd858d685385d13ee282093cfe6ff8 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:11 compute-0 podman[74465]: 2025-09-30 14:12:11.321703939 +0000 UTC m=+0.106360997 container start a69f0208767c90727f44f4457d5f69ab07fd858d685385d13ee282093cfe6ff8 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 14:12:11 compute-0 bash[74465]: a69f0208767c90727f44f4457d5f69ab07fd858d685385d13ee282093cfe6ff8
Sep 30 14:12:11 compute-0 podman[74465]: 2025-09-30 14:12:11.239961083 +0000 UTC m=+0.024618171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:11 compute-0 systemd[1]: Started Ceph mgr.compute-0.buxlkm for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:12:11 compute-0 ceph-mgr[74485]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 14:12:11 compute-0 ceph-mgr[74485]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 14:12:11 compute-0 ceph-mgr[74485]: pidfile_write: ignore empty --pid-file
Sep 30 14:12:11 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'alerts'
Sep 30 14:12:11 compute-0 podman[74486]: 2025-09-30 14:12:11.416334771 +0000 UTC m=+0.049093728 container create ebf9e708fa631a577e8ccad24a07e7a7d44fae7c28b055e323c667268d401ed4 (image=quay.io/ceph/ceph:v19, name=compassionate_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:12:11 compute-0 systemd[1]: Started libpod-conmon-ebf9e708fa631a577e8ccad24a07e7a7d44fae7c28b055e323c667268d401ed4.scope.
Sep 30 14:12:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f32d5b426f82e6eece15f2f171cd794d28249851a091ef6dd8436818aa8dd9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f32d5b426f82e6eece15f2f171cd794d28249851a091ef6dd8436818aa8dd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f32d5b426f82e6eece15f2f171cd794d28249851a091ef6dd8436818aa8dd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:11 compute-0 podman[74486]: 2025-09-30 14:12:11.399834682 +0000 UTC m=+0.032593669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:11 compute-0 podman[74486]: 2025-09-30 14:12:11.501904697 +0000 UTC m=+0.134663674 container init ebf9e708fa631a577e8ccad24a07e7a7d44fae7c28b055e323c667268d401ed4 (image=quay.io/ceph/ceph:v19, name=compassionate_poincare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:11 compute-0 podman[74486]: 2025-09-30 14:12:11.509780732 +0000 UTC m=+0.142539689 container start ebf9e708fa631a577e8ccad24a07e7a7d44fae7c28b055e323c667268d401ed4 (image=quay.io/ceph/ceph:v19, name=compassionate_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:11 compute-0 podman[74486]: 2025-09-30 14:12:11.512849132 +0000 UTC m=+0.145608099 container attach ebf9e708fa631a577e8ccad24a07e7a7d44fae7c28b055e323c667268d401ed4 (image=quay.io/ceph/ceph:v19, name=compassionate_poincare, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:11 compute-0 ceph-mgr[74485]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:12:11 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'balancer'
Sep 30 14:12:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:11.512+0000 7faa20791140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:12:11 compute-0 ceph-mgr[74485]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:12:11 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'cephadm'
Sep 30 14:12:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:11.622+0000 7faa20791140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:12:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Sep 30 14:12:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040880175' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]: 
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]: {
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "health": {
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "status": "HEALTH_OK",
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "checks": {},
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "mutes": []
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     },
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "election_epoch": 5,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "quorum": [
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         0
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     ],
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "quorum_names": [
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "compute-0"
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     ],
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "quorum_age": 2,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "monmap": {
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "epoch": 1,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "min_mon_release_name": "squid",
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "num_mons": 1
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     },
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "osdmap": {
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "epoch": 1,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "num_osds": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "num_up_osds": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "osd_up_since": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "num_in_osds": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "osd_in_since": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "num_remapped_pgs": 0
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     },
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "pgmap": {
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "pgs_by_state": [],
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "num_pgs": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "num_pools": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "num_objects": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "data_bytes": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "bytes_used": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "bytes_avail": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "bytes_total": 0
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     },
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "fsmap": {
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "epoch": 1,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "btime": "2025-09-30T14:12:06:949277+0000",
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "by_rank": [],
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "up:standby": 0
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     },
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "mgrmap": {
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "available": false,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "num_standbys": 0,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "modules": [
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:             "iostat",
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:             "nfs",
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:             "restful"
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         ],
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "services": {}
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     },
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "servicemap": {
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "epoch": 1,
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "modified": "2025-09-30T14:12:06.978138+0000",
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:         "services": {}
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     },
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]:     "progress_events": {}
Sep 30 14:12:11 compute-0 compassionate_poincare[74522]: }
Sep 30 14:12:11 compute-0 systemd[1]: libpod-ebf9e708fa631a577e8ccad24a07e7a7d44fae7c28b055e323c667268d401ed4.scope: Deactivated successfully.
Sep 30 14:12:11 compute-0 podman[74486]: 2025-09-30 14:12:11.767245229 +0000 UTC m=+0.400004196 container died ebf9e708fa631a577e8ccad24a07e7a7d44fae7c28b055e323c667268d401ed4 (image=quay.io/ceph/ceph:v19, name=compassionate_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:12:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-95f32d5b426f82e6eece15f2f171cd794d28249851a091ef6dd8436818aa8dd9-merged.mount: Deactivated successfully.
Sep 30 14:12:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4040880175' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:12:11 compute-0 podman[74486]: 2025-09-30 14:12:11.814256042 +0000 UTC m=+0.447014999 container remove ebf9e708fa631a577e8ccad24a07e7a7d44fae7c28b055e323c667268d401ed4 (image=quay.io/ceph/ceph:v19, name=compassionate_poincare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:12:11 compute-0 systemd[1]: libpod-conmon-ebf9e708fa631a577e8ccad24a07e7a7d44fae7c28b055e323c667268d401ed4.scope: Deactivated successfully.
Sep 30 14:12:12 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'crash'
Sep 30 14:12:12 compute-0 ceph-mgr[74485]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:12:12 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'dashboard'
Sep 30 14:12:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:12.558+0000 7faa20791140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:12:13 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'devicehealth'
Sep 30 14:12:13 compute-0 ceph-mgr[74485]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:12:13 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 14:12:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:13.267+0000 7faa20791140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:12:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 14:12:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 14:12:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   from numpy import show_config as show_numpy_config
Sep 30 14:12:13 compute-0 ceph-mgr[74485]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:12:13 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'influx'
Sep 30 14:12:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:13.452+0000 7faa20791140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:12:13 compute-0 ceph-mgr[74485]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:12:13 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'insights'
Sep 30 14:12:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:13.534+0000 7faa20791140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:12:13 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'iostat'
Sep 30 14:12:13 compute-0 ceph-mgr[74485]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:12:13 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'k8sevents'
Sep 30 14:12:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:13.691+0000 7faa20791140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:12:13 compute-0 podman[74569]: 2025-09-30 14:12:13.88863671 +0000 UTC m=+0.045352681 container create a6794ed6db79ee8b4b796901c67cbff1fade17983a765071cb2193bfbcd25160 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:13 compute-0 systemd[1]: Started libpod-conmon-a6794ed6db79ee8b4b796901c67cbff1fade17983a765071cb2193bfbcd25160.scope.
Sep 30 14:12:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46d1bce08f760a03e770aba30ef24d6171136e12f3677fdfe8dd33a16b15be7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:13 compute-0 podman[74569]: 2025-09-30 14:12:13.869090071 +0000 UTC m=+0.025806042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46d1bce08f760a03e770aba30ef24d6171136e12f3677fdfe8dd33a16b15be7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46d1bce08f760a03e770aba30ef24d6171136e12f3677fdfe8dd33a16b15be7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:13 compute-0 podman[74569]: 2025-09-30 14:12:13.980851398 +0000 UTC m=+0.137567399 container init a6794ed6db79ee8b4b796901c67cbff1fade17983a765071cb2193bfbcd25160 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:12:13 compute-0 podman[74569]: 2025-09-30 14:12:13.986815073 +0000 UTC m=+0.143531044 container start a6794ed6db79ee8b4b796901c67cbff1fade17983a765071cb2193bfbcd25160 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 14:12:13 compute-0 podman[74569]: 2025-09-30 14:12:13.989996246 +0000 UTC m=+0.146712227 container attach a6794ed6db79ee8b4b796901c67cbff1fade17983a765071cb2193bfbcd25160 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:12:14 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'localpool'
Sep 30 14:12:14 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 14:12:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Sep 30 14:12:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954919079' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:12:14 compute-0 upbeat_cray[74586]: 
Sep 30 14:12:14 compute-0 upbeat_cray[74586]: {
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "health": {
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "status": "HEALTH_OK",
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "checks": {},
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "mutes": []
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     },
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "election_epoch": 5,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "quorum": [
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         0
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     ],
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "quorum_names": [
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "compute-0"
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     ],
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "quorum_age": 4,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "monmap": {
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "epoch": 1,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "min_mon_release_name": "squid",
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "num_mons": 1
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     },
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "osdmap": {
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "epoch": 1,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "num_osds": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "num_up_osds": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "osd_up_since": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "num_in_osds": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "osd_in_since": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "num_remapped_pgs": 0
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     },
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "pgmap": {
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "pgs_by_state": [],
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "num_pgs": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "num_pools": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "num_objects": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "data_bytes": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "bytes_used": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "bytes_avail": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "bytes_total": 0
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     },
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "fsmap": {
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "epoch": 1,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "btime": "2025-09-30T14:12:06:949277+0000",
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "by_rank": [],
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "up:standby": 0
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     },
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "mgrmap": {
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "available": false,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "num_standbys": 0,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "modules": [
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:             "iostat",
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:             "nfs",
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:             "restful"
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         ],
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "services": {}
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     },
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "servicemap": {
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "epoch": 1,
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "modified": "2025-09-30T14:12:06.978138+0000",
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:         "services": {}
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     },
Sep 30 14:12:14 compute-0 upbeat_cray[74586]:     "progress_events": {}
Sep 30 14:12:14 compute-0 upbeat_cray[74586]: }
Sep 30 14:12:14 compute-0 systemd[1]: libpod-a6794ed6db79ee8b4b796901c67cbff1fade17983a765071cb2193bfbcd25160.scope: Deactivated successfully.
Sep 30 14:12:14 compute-0 podman[74569]: 2025-09-30 14:12:14.202700939 +0000 UTC m=+0.359416910 container died a6794ed6db79ee8b4b796901c67cbff1fade17983a765071cb2193bfbcd25160 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b46d1bce08f760a03e770aba30ef24d6171136e12f3677fdfe8dd33a16b15be7-merged.mount: Deactivated successfully.
Sep 30 14:12:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1954919079' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:12:14 compute-0 podman[74569]: 2025-09-30 14:12:14.240739148 +0000 UTC m=+0.397455119 container remove a6794ed6db79ee8b4b796901c67cbff1fade17983a765071cb2193bfbcd25160 (image=quay.io/ceph/ceph:v19, name=upbeat_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:14 compute-0 systemd[1]: libpod-conmon-a6794ed6db79ee8b4b796901c67cbff1fade17983a765071cb2193bfbcd25160.scope: Deactivated successfully.
Sep 30 14:12:14 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mirroring'
Sep 30 14:12:14 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'nfs'
Sep 30 14:12:14 compute-0 ceph-mgr[74485]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:12:14 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'orchestrator'
Sep 30 14:12:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:14.764+0000 7faa20791140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 14:12:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:14.998+0000 7faa20791140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_support'
Sep 30 14:12:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:15.085+0000 7faa20791140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 14:12:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:15.158+0000 7faa20791140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'progress'
Sep 30 14:12:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:15.244+0000 7faa20791140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'prometheus'
Sep 30 14:12:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:15.322+0000 7faa20791140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rbd_support'
Sep 30 14:12:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:15.705+0000 7faa20791140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:12:15 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'restful'
Sep 30 14:12:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:15.822+0000 7faa20791140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:12:16 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rgw'
Sep 30 14:12:16 compute-0 ceph-mgr[74485]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:12:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:16.328+0000 7faa20791140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:12:16 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rook'
Sep 30 14:12:16 compute-0 podman[74624]: 2025-09-30 14:12:16.285761154 +0000 UTC m=+0.022122367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:16 compute-0 ceph-mgr[74485]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:12:16 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'selftest'
Sep 30 14:12:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:16.910+0000 7faa20791140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:12:16 compute-0 ceph-mgr[74485]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:12:16 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'snap_schedule'
Sep 30 14:12:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:16.987+0000 7faa20791140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 podman[74624]: 2025-09-30 14:12:17.05409584 +0000 UTC m=+0.790457033 container create 2d772ff36b27f53ced469afa913bb007f0e109d286da7f178d9d91842d141341 (image=quay.io/ceph/ceph:v19, name=competent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'stats'
Sep 30 14:12:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:17.070+0000 7faa20791140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'status'
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telegraf'
Sep 30 14:12:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:17.229+0000 7faa20791140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telemetry'
Sep 30 14:12:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:17.310+0000 7faa20791140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 14:12:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:17.480+0000 7faa20791140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 systemd[1]: Started libpod-conmon-2d772ff36b27f53ced469afa913bb007f0e109d286da7f178d9d91842d141341.scope.
Sep 30 14:12:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1303491b7f9ac9e6eaa1e55dcd9ddd40cd15c73687e776fa4a08e911dce17d9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1303491b7f9ac9e6eaa1e55dcd9ddd40cd15c73687e776fa4a08e911dce17d9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1303491b7f9ac9e6eaa1e55dcd9ddd40cd15c73687e776fa4a08e911dce17d9e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'volumes'
Sep 30 14:12:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:17.713+0000 7faa20791140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:12:17 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'zabbix'
Sep 30 14:12:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:17.990+0000 7faa20791140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:12:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:18.064+0000 7faa20791140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: ms_deliver_dispatch: unhandled message 0x5647246189c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.buxlkm
Sep 30 14:12:18 compute-0 podman[74624]: 2025-09-30 14:12:18.198481577 +0000 UTC m=+1.934842790 container init 2d772ff36b27f53ced469afa913bb007f0e109d286da7f178d9d91842d141341 (image=quay.io/ceph/ceph:v19, name=competent_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:12:18 compute-0 podman[74624]: 2025-09-30 14:12:18.207717267 +0000 UTC m=+1.944078460 container start 2d772ff36b27f53ced469afa913bb007f0e109d286da7f178d9d91842d141341 (image=quay.io/ceph/ceph:v19, name=competent_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr handle_mgr_map Activating!
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr handle_mgr_map I am now activating
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.buxlkm(active, starting, since 0.151229s)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e1 all = 1
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"} v 0)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: balancer
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: crash
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [balancer INFO root] Starting
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Manager daemon compute-0.buxlkm is now available
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:12:18
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:12:18 compute-0 podman[74624]: 2025-09-30 14:12:18.236316331 +0000 UTC m=+1.972677544 container attach 2d772ff36b27f53ced469afa913bb007f0e109d286da7f178d9d91842d141341 (image=quay.io/ceph/ceph:v19, name=competent_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [balancer INFO root] No pools available
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: devicehealth
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: iostat
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Starting
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: nfs
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: orchestrator
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: pg_autoscaler
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: progress
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [progress INFO root] Loading...
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [progress INFO root] No stored events to load
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [progress INFO root] Loaded [] historic events
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [progress INFO root] Loaded OSDMap, ready.
Sep 30 14:12:18 compute-0 ceph-mon[74194]: Activating manager daemon compute-0.buxlkm
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mgrmap e2: compute-0.buxlkm(active, starting, since 0.151229s)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mon[74194]: from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mon[74194]: from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mon[74194]: from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mon[74194]: from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mon[74194]: Manager daemon compute-0.buxlkm is now available
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [rbd_support INFO root] recovery thread starting
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [rbd_support INFO root] starting setup
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: rbd_support
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: restful
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [restful INFO root] server_addr: :: server_port: 8003
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [restful WARNING root] server not running: no certificate configured
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: status
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: telemetry
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"} v 0)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [rbd_support INFO root] PerfHandler: starting
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TaskHandler: starting
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"} v 0)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"}]: dispatch
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: volumes
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Sep 30 14:12:18 compute-0 ceph-mgr[74485]: [rbd_support INFO root] setup complete
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Sep 30 14:12:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3506888913' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]: 
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]: {
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "health": {
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "status": "HEALTH_OK",
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "checks": {},
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "mutes": []
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     },
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "election_epoch": 5,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "quorum": [
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         0
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     ],
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "quorum_names": [
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "compute-0"
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     ],
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "quorum_age": 8,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "monmap": {
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "epoch": 1,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "min_mon_release_name": "squid",
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "num_mons": 1
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     },
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "osdmap": {
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "epoch": 1,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "num_osds": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "num_up_osds": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "osd_up_since": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "num_in_osds": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "osd_in_since": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "num_remapped_pgs": 0
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     },
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "pgmap": {
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "pgs_by_state": [],
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "num_pgs": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "num_pools": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "num_objects": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "data_bytes": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "bytes_used": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "bytes_avail": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "bytes_total": 0
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     },
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "fsmap": {
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "epoch": 1,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "btime": "2025-09-30T14:12:06:949277+0000",
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "by_rank": [],
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "up:standby": 0
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     },
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "mgrmap": {
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "available": false,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "num_standbys": 0,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "modules": [
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:             "iostat",
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:             "nfs",
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:             "restful"
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         ],
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "services": {}
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     },
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "servicemap": {
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "epoch": 1,
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "modified": "2025-09-30T14:12:06.978138+0000",
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:         "services": {}
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     },
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]:     "progress_events": {}
Sep 30 14:12:18 compute-0 competent_mirzakhani[74641]: }
Sep 30 14:12:18 compute-0 systemd[1]: libpod-2d772ff36b27f53ced469afa913bb007f0e109d286da7f178d9d91842d141341.scope: Deactivated successfully.
Sep 30 14:12:18 compute-0 podman[74624]: 2025-09-30 14:12:18.428408757 +0000 UTC m=+2.164769950 container died 2d772ff36b27f53ced469afa913bb007f0e109d286da7f178d9d91842d141341 (image=quay.io/ceph/ceph:v19, name=competent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1303491b7f9ac9e6eaa1e55dcd9ddd40cd15c73687e776fa4a08e911dce17d9e-merged.mount: Deactivated successfully.
Sep 30 14:12:18 compute-0 podman[74624]: 2025-09-30 14:12:18.466274782 +0000 UTC m=+2.202635975 container remove 2d772ff36b27f53ced469afa913bb007f0e109d286da7f178d9d91842d141341 (image=quay.io/ceph/ceph:v19, name=competent_mirzakhani, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 14:12:18 compute-0 systemd[1]: libpod-conmon-2d772ff36b27f53ced469afa913bb007f0e109d286da7f178d9d91842d141341.scope: Deactivated successfully.
Sep 30 14:12:19 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.buxlkm(active, since 1.1658s)
Sep 30 14:12:19 compute-0 ceph-mon[74194]: from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"}]: dispatch
Sep 30 14:12:19 compute-0 ceph-mon[74194]: from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:19 compute-0 ceph-mon[74194]: from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"}]: dispatch
Sep 30 14:12:19 compute-0 ceph-mon[74194]: from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:19 compute-0 ceph-mon[74194]: from='mgr.14102 192.168.122.100:0/307075564' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3506888913' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:12:19 compute-0 ceph-mon[74194]: mgrmap e3: compute-0.buxlkm(active, since 1.1658s)
Sep 30 14:12:20 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:20 compute-0 podman[74758]: 2025-09-30 14:12:20.508107614 +0000 UTC m=+0.020206506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:20 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.buxlkm(active, since 2s)
Sep 30 14:12:22 compute-0 podman[74758]: 2025-09-30 14:12:22.199868269 +0000 UTC m=+1.711967091 container create bd9bef2957fbbbf2537fc8ce0eb94ad92af0eb654ca156cea0e4896ef2fc4086 (image=quay.io/ceph/ceph:v19, name=elastic_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:12:22 compute-0 ceph-mon[74194]: mgrmap e4: compute-0.buxlkm(active, since 2s)
Sep 30 14:12:22 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:22 compute-0 systemd[1]: Started libpod-conmon-bd9bef2957fbbbf2537fc8ce0eb94ad92af0eb654ca156cea0e4896ef2fc4086.scope.
Sep 30 14:12:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c27fb5154349e7d4586ac6446b5ad47ca287c08c742f079888aaefb7620e72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c27fb5154349e7d4586ac6446b5ad47ca287c08c742f079888aaefb7620e72/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c27fb5154349e7d4586ac6446b5ad47ca287c08c742f079888aaefb7620e72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:22 compute-0 podman[74758]: 2025-09-30 14:12:22.356666358 +0000 UTC m=+1.868765230 container init bd9bef2957fbbbf2537fc8ce0eb94ad92af0eb654ca156cea0e4896ef2fc4086 (image=quay.io/ceph/ceph:v19, name=elastic_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:22 compute-0 podman[74758]: 2025-09-30 14:12:22.362443078 +0000 UTC m=+1.874541890 container start bd9bef2957fbbbf2537fc8ce0eb94ad92af0eb654ca156cea0e4896ef2fc4086 (image=quay.io/ceph/ceph:v19, name=elastic_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:22 compute-0 podman[74758]: 2025-09-30 14:12:22.398714082 +0000 UTC m=+1.910812894 container attach bd9bef2957fbbbf2537fc8ce0eb94ad92af0eb654ca156cea0e4896ef2fc4086 (image=quay.io/ceph/ceph:v19, name=elastic_hamilton, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:12:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Sep 30 14:12:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23463823' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]: 
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]: {
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "health": {
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "status": "HEALTH_OK",
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "checks": {},
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "mutes": []
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     },
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "election_epoch": 5,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "quorum": [
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         0
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     ],
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "quorum_names": [
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "compute-0"
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     ],
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "quorum_age": 13,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "monmap": {
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "epoch": 1,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "min_mon_release_name": "squid",
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "num_mons": 1
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     },
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "osdmap": {
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "epoch": 1,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "num_osds": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "num_up_osds": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "osd_up_since": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "num_in_osds": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "osd_in_since": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "num_remapped_pgs": 0
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     },
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "pgmap": {
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "pgs_by_state": [],
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "num_pgs": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "num_pools": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "num_objects": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "data_bytes": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "bytes_used": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "bytes_avail": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "bytes_total": 0
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     },
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "fsmap": {
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "epoch": 1,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "btime": "2025-09-30T14:12:06:949277+0000",
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "by_rank": [],
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "up:standby": 0
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     },
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "mgrmap": {
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "available": true,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "num_standbys": 0,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "modules": [
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:             "iostat",
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:             "nfs",
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:             "restful"
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         ],
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "services": {}
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     },
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "servicemap": {
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "epoch": 1,
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "modified": "2025-09-30T14:12:06.978138+0000",
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:         "services": {}
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     },
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]:     "progress_events": {}
Sep 30 14:12:22 compute-0 elastic_hamilton[74774]: }
Sep 30 14:12:22 compute-0 systemd[1]: libpod-bd9bef2957fbbbf2537fc8ce0eb94ad92af0eb654ca156cea0e4896ef2fc4086.scope: Deactivated successfully.
Sep 30 14:12:22 compute-0 podman[74758]: 2025-09-30 14:12:22.82385767 +0000 UTC m=+2.335956482 container died bd9bef2957fbbbf2537fc8ce0eb94ad92af0eb654ca156cea0e4896ef2fc4086 (image=quay.io/ceph/ceph:v19, name=elastic_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 14:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c27fb5154349e7d4586ac6446b5ad47ca287c08c742f079888aaefb7620e72-merged.mount: Deactivated successfully.
Sep 30 14:12:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/23463823' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:12:23 compute-0 podman[74758]: 2025-09-30 14:12:23.524289901 +0000 UTC m=+3.036388713 container remove bd9bef2957fbbbf2537fc8ce0eb94ad92af0eb654ca156cea0e4896ef2fc4086 (image=quay.io/ceph/ceph:v19, name=elastic_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:12:23 compute-0 systemd[1]: libpod-conmon-bd9bef2957fbbbf2537fc8ce0eb94ad92af0eb654ca156cea0e4896ef2fc4086.scope: Deactivated successfully.
Sep 30 14:12:23 compute-0 podman[74812]: 2025-09-30 14:12:23.592099314 +0000 UTC m=+0.044418196 container create 3bcdeac31d615c5da6d35910b0017eabccff76bcfc5039824771e1b2f760c8d7 (image=quay.io/ceph/ceph:v19, name=dazzling_shannon, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:23 compute-0 systemd[1]: Started libpod-conmon-3bcdeac31d615c5da6d35910b0017eabccff76bcfc5039824771e1b2f760c8d7.scope.
Sep 30 14:12:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8211681f8555e825434dfedc8688763f84eb7e43a546062c51217b96ec9acf8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8211681f8555e825434dfedc8688763f84eb7e43a546062c51217b96ec9acf8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8211681f8555e825434dfedc8688763f84eb7e43a546062c51217b96ec9acf8e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8211681f8555e825434dfedc8688763f84eb7e43a546062c51217b96ec9acf8e/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:23 compute-0 podman[74812]: 2025-09-30 14:12:23.575984725 +0000 UTC m=+0.028303637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:23 compute-0 podman[74812]: 2025-09-30 14:12:23.679674042 +0000 UTC m=+0.131992924 container init 3bcdeac31d615c5da6d35910b0017eabccff76bcfc5039824771e1b2f760c8d7 (image=quay.io/ceph/ceph:v19, name=dazzling_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:12:23 compute-0 podman[74812]: 2025-09-30 14:12:23.686481739 +0000 UTC m=+0.138800621 container start 3bcdeac31d615c5da6d35910b0017eabccff76bcfc5039824771e1b2f760c8d7 (image=quay.io/ceph/ceph:v19, name=dazzling_shannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:23 compute-0 podman[74812]: 2025-09-30 14:12:23.690574046 +0000 UTC m=+0.142892948 container attach 3bcdeac31d615c5da6d35910b0017eabccff76bcfc5039824771e1b2f760c8d7 (image=quay.io/ceph/ceph:v19, name=dazzling_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Sep 30 14:12:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/740415002' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 14:12:24 compute-0 dazzling_shannon[74829]: 
Sep 30 14:12:24 compute-0 dazzling_shannon[74829]: [global]
Sep 30 14:12:24 compute-0 dazzling_shannon[74829]:         fsid = 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:24 compute-0 dazzling_shannon[74829]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Sep 30 14:12:24 compute-0 systemd[1]: libpod-3bcdeac31d615c5da6d35910b0017eabccff76bcfc5039824771e1b2f760c8d7.scope: Deactivated successfully.
Sep 30 14:12:24 compute-0 podman[74812]: 2025-09-30 14:12:24.059366759 +0000 UTC m=+0.511685641 container died 3bcdeac31d615c5da6d35910b0017eabccff76bcfc5039824771e1b2f760c8d7 (image=quay.io/ceph/ceph:v19, name=dazzling_shannon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8211681f8555e825434dfedc8688763f84eb7e43a546062c51217b96ec9acf8e-merged.mount: Deactivated successfully.
Sep 30 14:12:24 compute-0 podman[74812]: 2025-09-30 14:12:24.096019122 +0000 UTC m=+0.548338004 container remove 3bcdeac31d615c5da6d35910b0017eabccff76bcfc5039824771e1b2f760c8d7 (image=quay.io/ceph/ceph:v19, name=dazzling_shannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:24 compute-0 systemd[1]: libpod-conmon-3bcdeac31d615c5da6d35910b0017eabccff76bcfc5039824771e1b2f760c8d7.scope: Deactivated successfully.
Sep 30 14:12:24 compute-0 podman[74866]: 2025-09-30 14:12:24.154040112 +0000 UTC m=+0.037587859 container create 12affc4c70376fdcbd9bf62d4faa19cbe713120936288def6866108f134b3e27 (image=quay.io/ceph/ceph:v19, name=jolly_brown, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:24 compute-0 systemd[1]: Started libpod-conmon-12affc4c70376fdcbd9bf62d4faa19cbe713120936288def6866108f134b3e27.scope.
Sep 30 14:12:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc74776d5233b50da86b42951366daec41d8da99b9c91aef9c8776bd5a595e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc74776d5233b50da86b42951366daec41d8da99b9c91aef9c8776bd5a595e6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc74776d5233b50da86b42951366daec41d8da99b9c91aef9c8776bd5a595e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:24 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:24 compute-0 podman[74866]: 2025-09-30 14:12:24.224579536 +0000 UTC m=+0.108127303 container init 12affc4c70376fdcbd9bf62d4faa19cbe713120936288def6866108f134b3e27 (image=quay.io/ceph/ceph:v19, name=jolly_brown, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:12:24 compute-0 podman[74866]: 2025-09-30 14:12:24.229909595 +0000 UTC m=+0.113457332 container start 12affc4c70376fdcbd9bf62d4faa19cbe713120936288def6866108f134b3e27 (image=quay.io/ceph/ceph:v19, name=jolly_brown, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:24 compute-0 podman[74866]: 2025-09-30 14:12:24.136788093 +0000 UTC m=+0.020335860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:24 compute-0 podman[74866]: 2025-09-30 14:12:24.233205731 +0000 UTC m=+0.116753498 container attach 12affc4c70376fdcbd9bf62d4faa19cbe713120936288def6866108f134b3e27 (image=quay.io/ceph/ceph:v19, name=jolly_brown, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/740415002' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 14:12:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Sep 30 14:12:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2320178807' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Sep 30 14:12:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2320178807' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Sep 30 14:12:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2320178807' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr handle_mgr_map respawning because set of enabled modules changed!
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  e: '/usr/bin/ceph-mgr'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  0: '/usr/bin/ceph-mgr'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  1: '-n'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  2: 'mgr.compute-0.buxlkm'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  3: '-f'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  4: '--setuser'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  5: 'ceph'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  6: '--setgroup'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  7: 'ceph'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  8: '--default-log-to-file=false'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  9: '--default-log-to-journald=true'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  10: '--default-log-to-stderr=false'
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr respawn  exe_path /proc/self/exe
Sep 30 14:12:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.buxlkm(active, since 7s)
Sep 30 14:12:25 compute-0 systemd[1]: libpod-12affc4c70376fdcbd9bf62d4faa19cbe713120936288def6866108f134b3e27.scope: Deactivated successfully.
Sep 30 14:12:25 compute-0 podman[74866]: 2025-09-30 14:12:25.519367946 +0000 UTC m=+1.402915693 container died 12affc4c70376fdcbd9bf62d4faa19cbe713120936288def6866108f134b3e27 (image=quay.io/ceph/ceph:v19, name=jolly_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ignoring --setuser ceph since I am not root
Sep 30 14:12:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ignoring --setgroup ceph since I am not root
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: pidfile_write: ignore empty --pid-file
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'alerts'
Sep 30 14:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fc74776d5233b50da86b42951366daec41d8da99b9c91aef9c8776bd5a595e6-merged.mount: Deactivated successfully.
Sep 30 14:12:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:25.736+0000 7f0329c96140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'balancer'
Sep 30 14:12:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:25.834+0000 7f0329c96140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:12:25 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'cephadm'
Sep 30 14:12:25 compute-0 podman[74866]: 2025-09-30 14:12:25.859629466 +0000 UTC m=+1.743177203 container remove 12affc4c70376fdcbd9bf62d4faa19cbe713120936288def6866108f134b3e27 (image=quay.io/ceph/ceph:v19, name=jolly_brown, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Sep 30 14:12:25 compute-0 systemd[1]: libpod-conmon-12affc4c70376fdcbd9bf62d4faa19cbe713120936288def6866108f134b3e27.scope: Deactivated successfully.
Sep 30 14:12:25 compute-0 podman[74941]: 2025-09-30 14:12:25.918869657 +0000 UTC m=+0.041538671 container create 11af567f4399e7f3ae0f63cc7d00d04a74f8e5dd18dce17e701c9b89f6b123db (image=quay.io/ceph/ceph:v19, name=bold_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:25 compute-0 systemd[1]: Started libpod-conmon-11af567f4399e7f3ae0f63cc7d00d04a74f8e5dd18dce17e701c9b89f6b123db.scope.
Sep 30 14:12:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3277d23ab25936a164c6267cba322f7ef286ca2aa6d777f57566b372730c70d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3277d23ab25936a164c6267cba322f7ef286ca2aa6d777f57566b372730c70d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3277d23ab25936a164c6267cba322f7ef286ca2aa6d777f57566b372730c70d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:25 compute-0 podman[74941]: 2025-09-30 14:12:25.898713873 +0000 UTC m=+0.021382917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:26 compute-0 podman[74941]: 2025-09-30 14:12:26.068847998 +0000 UTC m=+0.191517022 container init 11af567f4399e7f3ae0f63cc7d00d04a74f8e5dd18dce17e701c9b89f6b123db (image=quay.io/ceph/ceph:v19, name=bold_margulis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:26 compute-0 podman[74941]: 2025-09-30 14:12:26.078441428 +0000 UTC m=+0.201110482 container start 11af567f4399e7f3ae0f63cc7d00d04a74f8e5dd18dce17e701c9b89f6b123db (image=quay.io/ceph/ceph:v19, name=bold_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:26 compute-0 podman[74941]: 2025-09-30 14:12:26.095561363 +0000 UTC m=+0.218230387 container attach 11af567f4399e7f3ae0f63cc7d00d04a74f8e5dd18dce17e701c9b89f6b123db (image=quay.io/ceph/ceph:v19, name=bold_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Sep 30 14:12:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3872835020' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 14:12:26 compute-0 bold_margulis[74957]: {
Sep 30 14:12:26 compute-0 bold_margulis[74957]:     "epoch": 5,
Sep 30 14:12:26 compute-0 bold_margulis[74957]:     "available": true,
Sep 30 14:12:26 compute-0 bold_margulis[74957]:     "active_name": "compute-0.buxlkm",
Sep 30 14:12:26 compute-0 bold_margulis[74957]:     "num_standby": 0
Sep 30 14:12:26 compute-0 bold_margulis[74957]: }
Sep 30 14:12:26 compute-0 systemd[1]: libpod-11af567f4399e7f3ae0f63cc7d00d04a74f8e5dd18dce17e701c9b89f6b123db.scope: Deactivated successfully.
Sep 30 14:12:26 compute-0 podman[74941]: 2025-09-30 14:12:26.504285875 +0000 UTC m=+0.626954899 container died 11af567f4399e7f3ae0f63cc7d00d04a74f8e5dd18dce17e701c9b89f6b123db (image=quay.io/ceph/ceph:v19, name=bold_margulis, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:26 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'crash'
Sep 30 14:12:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:26.678+0000 7f0329c96140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:12:26 compute-0 ceph-mgr[74485]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:12:26 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'dashboard'
Sep 30 14:12:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2320178807' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Sep 30 14:12:26 compute-0 ceph-mon[74194]: mgrmap e5: compute-0.buxlkm(active, since 7s)
Sep 30 14:12:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3872835020' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 14:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3277d23ab25936a164c6267cba322f7ef286ca2aa6d777f57566b372730c70d-merged.mount: Deactivated successfully.
Sep 30 14:12:27 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'devicehealth'
Sep 30 14:12:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:27.321+0000 7f0329c96140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:12:27 compute-0 ceph-mgr[74485]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:12:27 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 14:12:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 14:12:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 14:12:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   from numpy import show_config as show_numpy_config
Sep 30 14:12:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:27.532+0000 7f0329c96140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:12:27 compute-0 ceph-mgr[74485]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:12:27 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'influx'
Sep 30 14:12:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:27.620+0000 7f0329c96140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:12:27 compute-0 ceph-mgr[74485]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:12:27 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'insights'
Sep 30 14:12:27 compute-0 podman[74941]: 2025-09-30 14:12:27.633243212 +0000 UTC m=+1.755912236 container remove 11af567f4399e7f3ae0f63cc7d00d04a74f8e5dd18dce17e701c9b89f6b123db (image=quay.io/ceph/ceph:v19, name=bold_margulis, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 14:12:27 compute-0 systemd[1]: libpod-conmon-11af567f4399e7f3ae0f63cc7d00d04a74f8e5dd18dce17e701c9b89f6b123db.scope: Deactivated successfully.
Sep 30 14:12:27 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'iostat'
Sep 30 14:12:27 compute-0 podman[75006]: 2025-09-30 14:12:27.776112978 +0000 UTC m=+0.120238249 container create 17972ab5fc3b3b2ddb9ed41b00f97e1cc956d069777a5200c8816f45699e7913 (image=quay.io/ceph/ceph:v19, name=reverent_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:12:27 compute-0 podman[75006]: 2025-09-30 14:12:27.690232604 +0000 UTC m=+0.034357905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:27.784+0000 7f0329c96140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:12:27 compute-0 ceph-mgr[74485]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:12:27 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'k8sevents'
Sep 30 14:12:27 compute-0 systemd[1]: Started libpod-conmon-17972ab5fc3b3b2ddb9ed41b00f97e1cc956d069777a5200c8816f45699e7913.scope.
Sep 30 14:12:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7fe328f48da81f865ebf39df4c221ec41c7546500ead57aeeed521f8933174/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7fe328f48da81f865ebf39df4c221ec41c7546500ead57aeeed521f8933174/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7fe328f48da81f865ebf39df4c221ec41c7546500ead57aeeed521f8933174/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:27 compute-0 podman[75006]: 2025-09-30 14:12:27.849594279 +0000 UTC m=+0.193719580 container init 17972ab5fc3b3b2ddb9ed41b00f97e1cc956d069777a5200c8816f45699e7913 (image=quay.io/ceph/ceph:v19, name=reverent_chaplygin, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:12:27 compute-0 podman[75006]: 2025-09-30 14:12:27.855644247 +0000 UTC m=+0.199769518 container start 17972ab5fc3b3b2ddb9ed41b00f97e1cc956d069777a5200c8816f45699e7913 (image=quay.io/ceph/ceph:v19, name=reverent_chaplygin, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:27 compute-0 podman[75006]: 2025-09-30 14:12:27.858868041 +0000 UTC m=+0.202993332 container attach 17972ab5fc3b3b2ddb9ed41b00f97e1cc956d069777a5200c8816f45699e7913 (image=quay.io/ceph/ceph:v19, name=reverent_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 14:12:28 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'localpool'
Sep 30 14:12:28 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 14:12:28 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mirroring'
Sep 30 14:12:28 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'nfs'
Sep 30 14:12:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:28.837+0000 7f0329c96140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:12:28 compute-0 ceph-mgr[74485]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:12:28 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'orchestrator'
Sep 30 14:12:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:29.093+0000 7f0329c96140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 14:12:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:29.181+0000 7f0329c96140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_support'
Sep 30 14:12:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:29.265+0000 7f0329c96140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 14:12:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:29.365+0000 7f0329c96140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'progress'
Sep 30 14:12:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:29.450+0000 7f0329c96140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'prometheus'
Sep 30 14:12:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:29.829+0000 7f0329c96140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rbd_support'
Sep 30 14:12:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:29.931+0000 7f0329c96140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:12:29 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'restful'
Sep 30 14:12:30 compute-0 chronyd[54890]: Selected source 23.133.168.246 (pool.ntp.org)
Sep 30 14:12:30 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rgw'
Sep 30 14:12:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:30.404+0000 7f0329c96140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:12:30 compute-0 ceph-mgr[74485]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:12:30 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rook'
Sep 30 14:12:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:30.997+0000 7f0329c96140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:12:30 compute-0 ceph-mgr[74485]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:12:30 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'selftest'
Sep 30 14:12:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:31.077+0000 7f0329c96140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'snap_schedule'
Sep 30 14:12:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:31.167+0000 7f0329c96140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'stats'
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'status'
Sep 30 14:12:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:31.326+0000 7f0329c96140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telegraf'
Sep 30 14:12:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:31.402+0000 7f0329c96140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telemetry'
Sep 30 14:12:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:31.575+0000 7f0329c96140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 14:12:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:31.824+0000 7f0329c96140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:12:31 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'volumes'
Sep 30 14:12:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:32.127+0000 7f0329c96140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'zabbix'
Sep 30 14:12:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:12:32.208+0000 7f0329c96140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Active manager daemon compute-0.buxlkm restarted
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.buxlkm
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: ms_deliver_dispatch: unhandled message 0x559bebd4ed00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: mgr handle_mgr_map Activating!
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: mgr handle_mgr_map I am now activating
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.buxlkm(active, starting, since 0.145422s)
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"} v 0)
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"}]: dispatch
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e1 all = 1
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: balancer
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Manager daemon compute-0.buxlkm is now available
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: [balancer INFO root] Starting
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:12:32
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: [balancer INFO root] No pools available
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Sep 30 14:12:32 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Sep 30 14:12:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Sep 30 14:12:32 compute-0 ceph-mon[74194]: Active manager daemon compute-0.buxlkm restarted
Sep 30 14:12:32 compute-0 ceph-mon[74194]: Activating manager daemon compute-0.buxlkm
Sep 30 14:12:32 compute-0 ceph-mon[74194]: osdmap e2: 0 total, 0 up, 0 in
Sep 30 14:12:32 compute-0 ceph-mon[74194]: mgrmap e6: compute-0.buxlkm(active, starting, since 0.145422s)
Sep 30 14:12:32 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:12:32 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"}]: dispatch
Sep 30 14:12:32 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 14:12:32 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:12:32 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 14:12:32 compute-0 ceph-mon[74194]: Manager daemon compute-0.buxlkm is now available
Sep 30 14:12:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: cephadm
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: crash
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: devicehealth
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Starting
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: iostat
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: nfs
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: orchestrator
Sep 30 14:12:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 14:12:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: pg_autoscaler
Sep 30 14:12:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 14:12:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: progress
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [progress INFO root] Loading...
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [progress INFO root] No stored events to load
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [progress INFO root] Loaded [] historic events
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [progress INFO root] Loaded OSDMap, ready.
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] recovery thread starting
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] starting setup
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: rbd_support
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: restful
Sep 30 14:12:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"} v 0)
Sep 30 14:12:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"}]: dispatch
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [restful INFO root] server_addr: :: server_port: 8003
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [restful WARNING root] server not running: no certificate configured
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: status
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: telemetry
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] PerfHandler: starting
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TaskHandler: starting
Sep 30 14:12:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"} v 0)
Sep 30 14:12:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"}]: dispatch
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] setup complete
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: volumes
Sep 30 14:12:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.buxlkm(active, since 1.37613s)
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Sep 30 14:12:33 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Sep 30 14:12:33 compute-0 reverent_chaplygin[75022]: {
Sep 30 14:12:33 compute-0 reverent_chaplygin[75022]:     "mgrmap_epoch": 7,
Sep 30 14:12:33 compute-0 reverent_chaplygin[75022]:     "initialized": true
Sep 30 14:12:33 compute-0 reverent_chaplygin[75022]: }
Sep 30 14:12:33 compute-0 systemd[1]: libpod-17972ab5fc3b3b2ddb9ed41b00f97e1cc956d069777a5200c8816f45699e7913.scope: Deactivated successfully.
Sep 30 14:12:33 compute-0 podman[75006]: 2025-09-30 14:12:33.618475983 +0000 UTC m=+5.962601274 container died 17972ab5fc3b3b2ddb9ed41b00f97e1cc956d069777a5200c8816f45699e7913 (image=quay.io/ceph/ceph:v19, name=reverent_chaplygin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:34 compute-0 ceph-mon[74194]: Found migration_current of "None". Setting to last migration.
Sep 30 14:12:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:12:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:12:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"}]: dispatch
Sep 30 14:12:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"}]: dispatch
Sep 30 14:12:34 compute-0 ceph-mon[74194]: mgrmap e7: compute-0.buxlkm(active, since 1.37613s)
Sep 30 14:12:34 compute-0 ceph-mon[74194]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Sep 30 14:12:34 compute-0 ceph-mon[74194]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Sep 30 14:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7fe328f48da81f865ebf39df4c221ec41c7546500ead57aeeed521f8933174-merged.mount: Deactivated successfully.
Sep 30 14:12:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Sep 30 14:12:34 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Sep 30 14:12:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019923177 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:12:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:34 compute-0 podman[75006]: 2025-09-30 14:12:34.855422649 +0000 UTC m=+7.199547920 container remove 17972ab5fc3b3b2ddb9ed41b00f97e1cc956d069777a5200c8816f45699e7913 (image=quay.io/ceph/ceph:v19, name=reverent_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:12:34 compute-0 systemd[1]: libpod-conmon-17972ab5fc3b3b2ddb9ed41b00f97e1cc956d069777a5200c8816f45699e7913.scope: Deactivated successfully.
Sep 30 14:12:34 compute-0 podman[75170]: 2025-09-30 14:12:34.896371594 +0000 UTC m=+0.021352117 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:35 compute-0 podman[75170]: 2025-09-30 14:12:35.256394338 +0000 UTC m=+0.381374851 container create f6d444e6678e4b8245ff8d4ed312258b35a6059516c95997d510f70fc75832f1 (image=quay.io/ceph/ceph:v19, name=sweet_hawking, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:35 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.buxlkm(active, since 3s)
Sep 30 14:12:35 compute-0 systemd[1]: Started libpod-conmon-f6d444e6678e4b8245ff8d4ed312258b35a6059516c95997d510f70fc75832f1.scope.
Sep 30 14:12:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ab5ba42a69f6652a9bae1690fb9278d8b3c2b3c6aec53083049037f3e2880/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ab5ba42a69f6652a9bae1690fb9278d8b3c2b3c6aec53083049037f3e2880/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ab5ba42a69f6652a9bae1690fb9278d8b3c2b3c6aec53083049037f3e2880/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:35 compute-0 podman[75170]: 2025-09-30 14:12:35.372455477 +0000 UTC m=+0.497436010 container init f6d444e6678e4b8245ff8d4ed312258b35a6059516c95997d510f70fc75832f1 (image=quay.io/ceph/ceph:v19, name=sweet_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 14:12:35 compute-0 podman[75170]: 2025-09-30 14:12:35.378430052 +0000 UTC m=+0.503410555 container start f6d444e6678e4b8245ff8d4ed312258b35a6059516c95997d510f70fc75832f1 (image=quay.io/ceph/ceph:v19, name=sweet_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:35 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:35 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:35 compute-0 ceph-mon[74194]: mgrmap e8: compute-0.buxlkm(active, since 3s)
Sep 30 14:12:35 compute-0 podman[75170]: 2025-09-30 14:12:35.414598013 +0000 UTC m=+0.539578546 container attach f6d444e6678e4b8245ff8d4ed312258b35a6059516c95997d510f70fc75832f1 (image=quay.io/ceph/ceph:v19, name=sweet_hawking, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:12:35] ENGINE Bus STARTING
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:12:35] ENGINE Bus STARTING
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:12:35] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:12:35] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:12:35] ENGINE Client ('192.168.122.100', 55284) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:12:35] ENGINE Client ('192.168.122.100', 55284) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Sep 30 14:12:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 14:12:35 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:12:35 compute-0 systemd[1]: libpod-f6d444e6678e4b8245ff8d4ed312258b35a6059516c95997d510f70fc75832f1.scope: Deactivated successfully.
Sep 30 14:12:35 compute-0 podman[75170]: 2025-09-30 14:12:35.824735982 +0000 UTC m=+0.949716485 container died f6d444e6678e4b8245ff8d4ed312258b35a6059516c95997d510f70fc75832f1 (image=quay.io/ceph/ceph:v19, name=sweet_hawking, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Sep 30 14:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-879ab5ba42a69f6652a9bae1690fb9278d8b3c2b3c6aec53083049037f3e2880-merged.mount: Deactivated successfully.
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:12:35] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:12:35] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:12:35] ENGINE Bus STARTED
Sep 30 14:12:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:12:35] ENGINE Bus STARTED
Sep 30 14:12:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 14:12:35 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:12:35 compute-0 podman[75170]: 2025-09-30 14:12:35.864468155 +0000 UTC m=+0.989448658 container remove f6d444e6678e4b8245ff8d4ed312258b35a6059516c95997d510f70fc75832f1 (image=quay.io/ceph/ceph:v19, name=sweet_hawking, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 14:12:35 compute-0 systemd[1]: libpod-conmon-f6d444e6678e4b8245ff8d4ed312258b35a6059516c95997d510f70fc75832f1.scope: Deactivated successfully.
Sep 30 14:12:36 compute-0 podman[75248]: 2025-09-30 14:12:35.906966171 +0000 UTC m=+0.020814783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:36 compute-0 podman[75248]: 2025-09-30 14:12:36.072077816 +0000 UTC m=+0.185926408 container create ce69679bd882d7d64b6aa7599f90e6c619025779042dc51a289bd2b6881f3ab9 (image=quay.io/ceph/ceph:v19, name=crazy_khorana, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 14:12:36 compute-0 systemd[1]: Started libpod-conmon-ce69679bd882d7d64b6aa7599f90e6c619025779042dc51a289bd2b6881f3ab9.scope.
Sep 30 14:12:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb30dd145d1b5a15edb3785d17fa137987865c0776cd08fede809d0ae06f7d50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb30dd145d1b5a15edb3785d17fa137987865c0776cd08fede809d0ae06f7d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb30dd145d1b5a15edb3785d17fa137987865c0776cd08fede809d0ae06f7d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:36 compute-0 podman[75248]: 2025-09-30 14:12:36.175144817 +0000 UTC m=+0.288993639 container init ce69679bd882d7d64b6aa7599f90e6c619025779042dc51a289bd2b6881f3ab9 (image=quay.io/ceph/ceph:v19, name=crazy_khorana, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:36 compute-0 podman[75248]: 2025-09-30 14:12:36.180780003 +0000 UTC m=+0.294628595 container start ce69679bd882d7d64b6aa7599f90e6c619025779042dc51a289bd2b6881f3ab9 (image=quay.io/ceph/ceph:v19, name=crazy_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:36 compute-0 podman[75248]: 2025-09-30 14:12:36.189007087 +0000 UTC m=+0.302855699 container attach ce69679bd882d7d64b6aa7599f90e6c619025779042dc51a289bd2b6881f3ab9 (image=quay.io/ceph/ceph:v19, name=crazy_khorana, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:12:36 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:36 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Sep 30 14:12:36 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:36 compute-0 ceph-mgr[74485]: [cephadm INFO root] Set ssh ssh_user
Sep 30 14:12:36 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Sep 30 14:12:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Sep 30 14:12:36 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:36 compute-0 ceph-mgr[74485]: [cephadm INFO root] Set ssh ssh_config
Sep 30 14:12:36 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Sep 30 14:12:36 compute-0 ceph-mgr[74485]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Sep 30 14:12:36 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Sep 30 14:12:36 compute-0 crazy_khorana[75264]: ssh user set to ceph-admin. sudo will be used
Sep 30 14:12:36 compute-0 systemd[1]: libpod-ce69679bd882d7d64b6aa7599f90e6c619025779042dc51a289bd2b6881f3ab9.scope: Deactivated successfully.
Sep 30 14:12:36 compute-0 podman[75290]: 2025-09-30 14:12:36.626091067 +0000 UTC m=+0.024468178 container died ce69679bd882d7d64b6aa7599f90e6c619025779042dc51a289bd2b6881f3ab9 (image=quay.io/ceph/ceph:v19, name=crazy_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 14:12:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb30dd145d1b5a15edb3785d17fa137987865c0776cd08fede809d0ae06f7d50-merged.mount: Deactivated successfully.
Sep 30 14:12:36 compute-0 podman[75290]: 2025-09-30 14:12:36.686279262 +0000 UTC m=+0.084656353 container remove ce69679bd882d7d64b6aa7599f90e6c619025779042dc51a289bd2b6881f3ab9 (image=quay.io/ceph/ceph:v19, name=crazy_khorana, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:12:36 compute-0 systemd[1]: libpod-conmon-ce69679bd882d7d64b6aa7599f90e6c619025779042dc51a289bd2b6881f3ab9.scope: Deactivated successfully.
Sep 30 14:12:36 compute-0 podman[75305]: 2025-09-30 14:12:36.797743893 +0000 UTC m=+0.087578030 container create 9c66d98d9be9b01aeab8b68e7c2a6ab274bf7ae383beb1197c296ed493527070 (image=quay.io/ceph/ceph:v19, name=recursing_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:36 compute-0 podman[75305]: 2025-09-30 14:12:36.729484146 +0000 UTC m=+0.019318313 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:36 compute-0 ceph-mon[74194]: [30/Sep/2025:14:12:35] ENGINE Bus STARTING
Sep 30 14:12:36 compute-0 ceph-mon[74194]: [30/Sep/2025:14:12:35] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:12:36 compute-0 ceph-mon[74194]: [30/Sep/2025:14:12:35] ENGINE Client ('192.168.122.100', 55284) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:12:36 compute-0 ceph-mon[74194]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:12:36 compute-0 ceph-mon[74194]: [30/Sep/2025:14:12:35] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:12:36 compute-0 ceph-mon[74194]: [30/Sep/2025:14:12:35] ENGINE Bus STARTED
Sep 30 14:12:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:12:36 compute-0 ceph-mon[74194]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:36 compute-0 ceph-mon[74194]: Set ssh ssh_user
Sep 30 14:12:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:36 compute-0 ceph-mon[74194]: Set ssh ssh_config
Sep 30 14:12:36 compute-0 ceph-mon[74194]: ssh user set to ceph-admin. sudo will be used
Sep 30 14:12:36 compute-0 systemd[1]: Started libpod-conmon-9c66d98d9be9b01aeab8b68e7c2a6ab274bf7ae383beb1197c296ed493527070.scope.
Sep 30 14:12:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb9f913a26ec404852a99589bbba18e82b33ae0da6165d726e2e9e0e075ade2/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb9f913a26ec404852a99589bbba18e82b33ae0da6165d726e2e9e0e075ade2/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb9f913a26ec404852a99589bbba18e82b33ae0da6165d726e2e9e0e075ade2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb9f913a26ec404852a99589bbba18e82b33ae0da6165d726e2e9e0e075ade2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb9f913a26ec404852a99589bbba18e82b33ae0da6165d726e2e9e0e075ade2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:36 compute-0 podman[75305]: 2025-09-30 14:12:36.918093262 +0000 UTC m=+0.207927419 container init 9c66d98d9be9b01aeab8b68e7c2a6ab274bf7ae383beb1197c296ed493527070 (image=quay.io/ceph/ceph:v19, name=recursing_poitras, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:36 compute-0 podman[75305]: 2025-09-30 14:12:36.924242002 +0000 UTC m=+0.214076139 container start 9c66d98d9be9b01aeab8b68e7c2a6ab274bf7ae383beb1197c296ed493527070 (image=quay.io/ceph/ceph:v19, name=recursing_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 14:12:36 compute-0 podman[75305]: 2025-09-30 14:12:36.953829432 +0000 UTC m=+0.243663599 container attach 9c66d98d9be9b01aeab8b68e7c2a6ab274bf7ae383beb1197c296ed493527070 (image=quay.io/ceph/ceph:v19, name=recursing_poitras, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:12:37 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Sep 30 14:12:37 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:37 compute-0 ceph-mgr[74485]: [cephadm INFO root] Set ssh ssh_identity_key
Sep 30 14:12:37 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Sep 30 14:12:37 compute-0 ceph-mgr[74485]: [cephadm INFO root] Set ssh private key
Sep 30 14:12:37 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Set ssh private key
Sep 30 14:12:37 compute-0 systemd[1]: libpod-9c66d98d9be9b01aeab8b68e7c2a6ab274bf7ae383beb1197c296ed493527070.scope: Deactivated successfully.
Sep 30 14:12:37 compute-0 podman[75347]: 2025-09-30 14:12:37.359099104 +0000 UTC m=+0.021599953 container died 9c66d98d9be9b01aeab8b68e7c2a6ab274bf7ae383beb1197c296ed493527070 (image=quay.io/ceph/ceph:v19, name=recursing_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 14:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddb9f913a26ec404852a99589bbba18e82b33ae0da6165d726e2e9e0e075ade2-merged.mount: Deactivated successfully.
Sep 30 14:12:37 compute-0 podman[75347]: 2025-09-30 14:12:37.525349398 +0000 UTC m=+0.187850237 container remove 9c66d98d9be9b01aeab8b68e7c2a6ab274bf7ae383beb1197c296ed493527070 (image=quay.io/ceph/ceph:v19, name=recursing_poitras, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:37 compute-0 systemd[1]: libpod-conmon-9c66d98d9be9b01aeab8b68e7c2a6ab274bf7ae383beb1197c296ed493527070.scope: Deactivated successfully.
Sep 30 14:12:37 compute-0 podman[75363]: 2025-09-30 14:12:37.597144266 +0000 UTC m=+0.045108815 container create 48fb326c57583c3b046430e6940e63e4dfafda68cd72d468b5ac7030180e182c (image=quay.io/ceph/ceph:v19, name=friendly_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:37 compute-0 systemd[1]: Started libpod-conmon-48fb326c57583c3b046430e6940e63e4dfafda68cd72d468b5ac7030180e182c.scope.
Sep 30 14:12:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648020190c29706de371fbcc44e664e70cd945c8628f9c4b72c943a14c5ebe69/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648020190c29706de371fbcc44e664e70cd945c8628f9c4b72c943a14c5ebe69/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648020190c29706de371fbcc44e664e70cd945c8628f9c4b72c943a14c5ebe69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648020190c29706de371fbcc44e664e70cd945c8628f9c4b72c943a14c5ebe69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:37 compute-0 podman[75363]: 2025-09-30 14:12:37.573393728 +0000 UTC m=+0.021358307 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648020190c29706de371fbcc44e664e70cd945c8628f9c4b72c943a14c5ebe69/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:37 compute-0 podman[75363]: 2025-09-30 14:12:37.728091412 +0000 UTC m=+0.176055981 container init 48fb326c57583c3b046430e6940e63e4dfafda68cd72d468b5ac7030180e182c (image=quay.io/ceph/ceph:v19, name=friendly_nightingale, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 14:12:37 compute-0 podman[75363]: 2025-09-30 14:12:37.73414854 +0000 UTC m=+0.182113089 container start 48fb326c57583c3b046430e6940e63e4dfafda68cd72d468b5ac7030180e182c (image=quay.io/ceph/ceph:v19, name=friendly_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:12:37 compute-0 podman[75363]: 2025-09-30 14:12:37.800525866 +0000 UTC m=+0.248490435 container attach 48fb326c57583c3b046430e6940e63e4dfafda68cd72d468b5ac7030180e182c (image=quay.io/ceph/ceph:v19, name=friendly_nightingale, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:38 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Sep 30 14:12:38 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:38 compute-0 ceph-mgr[74485]: [cephadm INFO root] Set ssh ssh_identity_pub
Sep 30 14:12:38 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Sep 30 14:12:38 compute-0 systemd[1]: libpod-48fb326c57583c3b046430e6940e63e4dfafda68cd72d468b5ac7030180e182c.scope: Deactivated successfully.
Sep 30 14:12:38 compute-0 podman[75363]: 2025-09-30 14:12:38.425089313 +0000 UTC m=+0.873053862 container died 48fb326c57583c3b046430e6940e63e4dfafda68cd72d468b5ac7030180e182c (image=quay.io/ceph/ceph:v19, name=friendly_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:12:38 compute-0 ceph-mon[74194]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:38 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:38 compute-0 ceph-mon[74194]: Set ssh ssh_identity_key
Sep 30 14:12:38 compute-0 ceph-mon[74194]: Set ssh private key
Sep 30 14:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-648020190c29706de371fbcc44e664e70cd945c8628f9c4b72c943a14c5ebe69-merged.mount: Deactivated successfully.
Sep 30 14:12:38 compute-0 podman[75363]: 2025-09-30 14:12:38.967949733 +0000 UTC m=+1.415914332 container remove 48fb326c57583c3b046430e6940e63e4dfafda68cd72d468b5ac7030180e182c (image=quay.io/ceph/ceph:v19, name=friendly_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:12:39 compute-0 systemd[1]: libpod-conmon-48fb326c57583c3b046430e6940e63e4dfafda68cd72d468b5ac7030180e182c.scope: Deactivated successfully.
Sep 30 14:12:39 compute-0 podman[75416]: 2025-09-30 14:12:39.07432561 +0000 UTC m=+0.083112063 container create 38d2caee088f31ad7f7e9d1b3816d88e584f4497baeef97e5c4e783283e2e56d (image=quay.io/ceph/ceph:v19, name=thirsty_einstein, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:12:39 compute-0 podman[75416]: 2025-09-30 14:12:39.011431474 +0000 UTC m=+0.020217947 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:39 compute-0 systemd[1]: Started libpod-conmon-38d2caee088f31ad7f7e9d1b3816d88e584f4497baeef97e5c4e783283e2e56d.scope.
Sep 30 14:12:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fac765e11040e3366e52065d032c372e3ed32d5833db32d73e62b85135cf835/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fac765e11040e3366e52065d032c372e3ed32d5833db32d73e62b85135cf835/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fac765e11040e3366e52065d032c372e3ed32d5833db32d73e62b85135cf835/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:39 compute-0 podman[75416]: 2025-09-30 14:12:39.186418466 +0000 UTC m=+0.195204939 container init 38d2caee088f31ad7f7e9d1b3816d88e584f4497baeef97e5c4e783283e2e56d (image=quay.io/ceph/ceph:v19, name=thirsty_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:12:39 compute-0 podman[75416]: 2025-09-30 14:12:39.20735817 +0000 UTC m=+0.216144623 container start 38d2caee088f31ad7f7e9d1b3816d88e584f4497baeef97e5c4e783283e2e56d (image=quay.io/ceph/ceph:v19, name=thirsty_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:39 compute-0 podman[75416]: 2025-09-30 14:12:39.312640349 +0000 UTC m=+0.321426802 container attach 38d2caee088f31ad7f7e9d1b3816d88e584f4497baeef97e5c4e783283e2e56d (image=quay.io/ceph/ceph:v19, name=thirsty_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:39 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:39 compute-0 thirsty_einstein[75431]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgUy2Jk2Te7YDC1tIwoMDCgqZeldnlvfxCGKNpl/++pwP0H2yyhM8KU861/3VDCcqfI/09qSeA4CQ6X9jpZDdocfBrcyE7CocvpvgmPjDbthwGLfuDXkjney3Xo3ms05mFbTurouv7WFhLhq8IClLjNPUmdU+AxP3JytoYzdjqdn+Uw37iICcIWUCM5UFAeGFYbGoKOGDmNsEZXF0NwfFuA0lVl2JqMaDO5nUzpelWUjZ1XYYiFi8JWGxN3EBQLhaYgZ7xO0Z2IhIeVfRCupcpl+7g2eoUm0FwnVe0TFPFpSnaDj7A7zZNYVbxcX1debuysfYdG3QzDKD+SNf0CkInwnnBTiLW773p47rL/pA86yDuthlDWju48FhgIcGByl+NzoEdQB0cQ7VD/k4+afxGlvNjSDQNcWxj4FNlXu+UPvYm4x701b0BUF9c9kZgL1gU5Q1fp6nYQURPLcCnRJhWV44o1VNbMyOW92BYB917nfe0Y2gJw5mOb6O7S8CNf5c= zuul@controller
Sep 30 14:12:39 compute-0 ceph-mon[74194]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:39 compute-0 ceph-mon[74194]: Set ssh ssh_identity_pub
Sep 30 14:12:39 compute-0 systemd[1]: libpod-38d2caee088f31ad7f7e9d1b3816d88e584f4497baeef97e5c4e783283e2e56d.scope: Deactivated successfully.
Sep 30 14:12:39 compute-0 podman[75416]: 2025-09-30 14:12:39.59216056 +0000 UTC m=+0.600947023 container died 38d2caee088f31ad7f7e9d1b3816d88e584f4497baeef97e5c4e783283e2e56d (image=quay.io/ceph/ceph:v19, name=thirsty_einstein, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fac765e11040e3366e52065d032c372e3ed32d5833db32d73e62b85135cf835-merged.mount: Deactivated successfully.
Sep 30 14:12:39 compute-0 podman[75416]: 2025-09-30 14:12:39.67210115 +0000 UTC m=+0.680887603 container remove 38d2caee088f31ad7f7e9d1b3816d88e584f4497baeef97e5c4e783283e2e56d (image=quay.io/ceph/ceph:v19, name=thirsty_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:39 compute-0 systemd[1]: libpod-conmon-38d2caee088f31ad7f7e9d1b3816d88e584f4497baeef97e5c4e783283e2e56d.scope: Deactivated successfully.
Sep 30 14:12:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053026 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:12:39 compute-0 podman[75468]: 2025-09-30 14:12:39.734250326 +0000 UTC m=+0.040577856 container create 47b3ad6cc4d531d81113faa2a362de7bee87a8e4170cb8e17f2b9d3aad7dc198 (image=quay.io/ceph/ceph:v19, name=upbeat_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Sep 30 14:12:39 compute-0 systemd[1]: Started libpod-conmon-47b3ad6cc4d531d81113faa2a362de7bee87a8e4170cb8e17f2b9d3aad7dc198.scope.
Sep 30 14:12:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986414074f153df161df59490723cc303f01495f5e3be3ef639f63da8c05473b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986414074f153df161df59490723cc303f01495f5e3be3ef639f63da8c05473b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986414074f153df161df59490723cc303f01495f5e3be3ef639f63da8c05473b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:39 compute-0 podman[75468]: 2025-09-30 14:12:39.716208417 +0000 UTC m=+0.022535957 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:39 compute-0 podman[75468]: 2025-09-30 14:12:39.955797819 +0000 UTC m=+0.262125359 container init 47b3ad6cc4d531d81113faa2a362de7bee87a8e4170cb8e17f2b9d3aad7dc198 (image=quay.io/ceph/ceph:v19, name=upbeat_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:39 compute-0 podman[75468]: 2025-09-30 14:12:39.96160276 +0000 UTC m=+0.267930280 container start 47b3ad6cc4d531d81113faa2a362de7bee87a8e4170cb8e17f2b9d3aad7dc198 (image=quay.io/ceph/ceph:v19, name=upbeat_goldstine, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:40 compute-0 podman[75468]: 2025-09-30 14:12:40.16956818 +0000 UTC m=+0.475895730 container attach 47b3ad6cc4d531d81113faa2a362de7bee87a8e4170cb8e17f2b9d3aad7dc198 (image=quay.io/ceph/ceph:v19, name=upbeat_goldstine, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:40 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:40 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:40 compute-0 sshd-session[75510]: Accepted publickey for ceph-admin from 192.168.122.100 port 39270 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:40 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Sep 30 14:12:40 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Sep 30 14:12:40 compute-0 systemd-logind[808]: New session 22 of user ceph-admin.
Sep 30 14:12:40 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Sep 30 14:12:40 compute-0 systemd[1]: Starting User Manager for UID 42477...
Sep 30 14:12:40 compute-0 systemd[75514]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:40 compute-0 ceph-mon[74194]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:40 compute-0 systemd[75514]: Queued start job for default target Main User Target.
Sep 30 14:12:40 compute-0 systemd[75514]: Created slice User Application Slice.
Sep 30 14:12:40 compute-0 systemd[75514]: Started Mark boot as successful after the user session has run 2 minutes.
Sep 30 14:12:40 compute-0 systemd[75514]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 14:12:40 compute-0 systemd[75514]: Reached target Paths.
Sep 30 14:12:40 compute-0 systemd[75514]: Reached target Timers.
Sep 30 14:12:40 compute-0 sshd-session[75527]: Accepted publickey for ceph-admin from 192.168.122.100 port 39278 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:40 compute-0 systemd[75514]: Starting D-Bus User Message Bus Socket...
Sep 30 14:12:40 compute-0 systemd[75514]: Starting Create User's Volatile Files and Directories...
Sep 30 14:12:40 compute-0 systemd-logind[808]: New session 24 of user ceph-admin.
Sep 30 14:12:40 compute-0 systemd[75514]: Listening on D-Bus User Message Bus Socket.
Sep 30 14:12:40 compute-0 systemd[75514]: Finished Create User's Volatile Files and Directories.
Sep 30 14:12:40 compute-0 systemd[75514]: Reached target Sockets.
Sep 30 14:12:40 compute-0 systemd[75514]: Reached target Basic System.
Sep 30 14:12:40 compute-0 systemd[75514]: Reached target Main User Target.
Sep 30 14:12:40 compute-0 systemd[75514]: Startup finished in 124ms.
Sep 30 14:12:40 compute-0 systemd[1]: Started User Manager for UID 42477.
Sep 30 14:12:40 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Sep 30 14:12:40 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Sep 30 14:12:40 compute-0 sshd-session[75510]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:40 compute-0 sshd-session[75527]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:40 compute-0 sudo[75534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:12:40 compute-0 sudo[75534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:40 compute-0 sudo[75534]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:41 compute-0 sshd-session[75559]: Accepted publickey for ceph-admin from 192.168.122.100 port 39294 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:41 compute-0 systemd-logind[808]: New session 25 of user ceph-admin.
Sep 30 14:12:41 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Sep 30 14:12:41 compute-0 sshd-session[75559]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:41 compute-0 sudo[75563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Sep 30 14:12:41 compute-0 sudo[75563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:41 compute-0 sudo[75563]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:41 compute-0 sshd-session[75588]: Accepted publickey for ceph-admin from 192.168.122.100 port 39306 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:41 compute-0 systemd-logind[808]: New session 26 of user ceph-admin.
Sep 30 14:12:41 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Sep 30 14:12:41 compute-0 sshd-session[75588]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:41 compute-0 sudo[75592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Sep 30 14:12:41 compute-0 sudo[75592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:41 compute-0 sudo[75592]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:41 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Sep 30 14:12:41 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Sep 30 14:12:41 compute-0 ceph-mon[74194]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:41 compute-0 sshd-session[75617]: Accepted publickey for ceph-admin from 192.168.122.100 port 39320 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:41 compute-0 systemd-logind[808]: New session 27 of user ceph-admin.
Sep 30 14:12:41 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Sep 30 14:12:41 compute-0 sshd-session[75617]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:41 compute-0 sudo[75621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:41 compute-0 sudo[75621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:41 compute-0 sudo[75621]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:41 compute-0 sshd-session[75646]: Accepted publickey for ceph-admin from 192.168.122.100 port 39334 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:41 compute-0 systemd-logind[808]: New session 28 of user ceph-admin.
Sep 30 14:12:41 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Sep 30 14:12:41 compute-0 sshd-session[75646]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:42 compute-0 sudo[75650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:42 compute-0 sudo[75650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:42 compute-0 sudo[75650]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:42 compute-0 sshd-session[75675]: Accepted publickey for ceph-admin from 192.168.122.100 port 39348 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:42 compute-0 systemd-logind[808]: New session 29 of user ceph-admin.
Sep 30 14:12:42 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Sep 30 14:12:42 compute-0 sshd-session[75675]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:42 compute-0 sudo[75679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Sep 30 14:12:42 compute-0 sudo[75679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:42 compute-0 sudo[75679]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:42 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:42 compute-0 sshd-session[75704]: Accepted publickey for ceph-admin from 192.168.122.100 port 39350 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:42 compute-0 systemd-logind[808]: New session 30 of user ceph-admin.
Sep 30 14:12:42 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Sep 30 14:12:42 compute-0 sshd-session[75704]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:42 compute-0 ceph-mon[74194]: Deploying cephadm binary to compute-0
Sep 30 14:12:42 compute-0 sudo[75708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:42 compute-0 sudo[75708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:42 compute-0 sudo[75708]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:42 compute-0 sshd-session[75733]: Accepted publickey for ceph-admin from 192.168.122.100 port 39352 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:42 compute-0 systemd-logind[808]: New session 31 of user ceph-admin.
Sep 30 14:12:42 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Sep 30 14:12:42 compute-0 sshd-session[75733]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:42 compute-0 sudo[75737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Sep 30 14:12:42 compute-0 sudo[75737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:42 compute-0 sudo[75737]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:43 compute-0 sshd-session[75762]: Accepted publickey for ceph-admin from 192.168.122.100 port 39354 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:43 compute-0 systemd-logind[808]: New session 32 of user ceph-admin.
Sep 30 14:12:43 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Sep 30 14:12:43 compute-0 sshd-session[75762]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:44 compute-0 sshd-session[75789]: Accepted publickey for ceph-admin from 192.168.122.100 port 39362 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:44 compute-0 systemd-logind[808]: New session 33 of user ceph-admin.
Sep 30 14:12:44 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Sep 30 14:12:44 compute-0 sshd-session[75789]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:44 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:44 compute-0 sudo[75793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Sep 30 14:12:44 compute-0 sudo[75793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:44 compute-0 sudo[75793]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:44 compute-0 sshd-session[75818]: Accepted publickey for ceph-admin from 192.168.122.100 port 39374 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:12:44 compute-0 systemd-logind[808]: New session 34 of user ceph-admin.
Sep 30 14:12:44 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Sep 30 14:12:44 compute-0 sshd-session[75818]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:12:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:12:44 compute-0 sudo[75822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Sep 30 14:12:44 compute-0 sudo[75822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:45 compute-0 sudo[75822]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 14:12:45 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:45 compute-0 ceph-mgr[74485]: [cephadm INFO root] Added host compute-0
Sep 30 14:12:45 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Added host compute-0
Sep 30 14:12:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 14:12:45 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:12:45 compute-0 upbeat_goldstine[75484]: Added host 'compute-0' with addr '192.168.122.100'
Sep 30 14:12:45 compute-0 systemd[1]: libpod-47b3ad6cc4d531d81113faa2a362de7bee87a8e4170cb8e17f2b9d3aad7dc198.scope: Deactivated successfully.
Sep 30 14:12:45 compute-0 podman[75468]: 2025-09-30 14:12:45.041611171 +0000 UTC m=+5.347938701 container died 47b3ad6cc4d531d81113faa2a362de7bee87a8e4170cb8e17f2b9d3aad7dc198 (image=quay.io/ceph/ceph:v19, name=upbeat_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 14:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-986414074f153df161df59490723cc303f01495f5e3be3ef639f63da8c05473b-merged.mount: Deactivated successfully.
Sep 30 14:12:45 compute-0 sudo[75867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:12:45 compute-0 sudo[75867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:45 compute-0 sudo[75867]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:45 compute-0 podman[75468]: 2025-09-30 14:12:45.080422961 +0000 UTC m=+5.386750481 container remove 47b3ad6cc4d531d81113faa2a362de7bee87a8e4170cb8e17f2b9d3aad7dc198 (image=quay.io/ceph/ceph:v19, name=upbeat_goldstine, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:45 compute-0 systemd[1]: libpod-conmon-47b3ad6cc4d531d81113faa2a362de7bee87a8e4170cb8e17f2b9d3aad7dc198.scope: Deactivated successfully.
Sep 30 14:12:45 compute-0 sudo[75905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Sep 30 14:12:45 compute-0 sudo[75905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:45 compute-0 podman[75909]: 2025-09-30 14:12:45.141590472 +0000 UTC m=+0.040107025 container create 1a27fcb4306c1ed4fd01e7ec94087750a95c11794b29d9da31697d5bc522f8a3 (image=quay.io/ceph/ceph:v19, name=nice_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:45 compute-0 systemd[1]: Started libpod-conmon-1a27fcb4306c1ed4fd01e7ec94087750a95c11794b29d9da31697d5bc522f8a3.scope.
Sep 30 14:12:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517ddaf95f32cded1786ce028818f210398e7ef669eceba256a8f8304fd0b610/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517ddaf95f32cded1786ce028818f210398e7ef669eceba256a8f8304fd0b610/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517ddaf95f32cded1786ce028818f210398e7ef669eceba256a8f8304fd0b610/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:45 compute-0 podman[75909]: 2025-09-30 14:12:45.123286355 +0000 UTC m=+0.021802898 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:45 compute-0 podman[75909]: 2025-09-30 14:12:45.344950722 +0000 UTC m=+0.243467265 container init 1a27fcb4306c1ed4fd01e7ec94087750a95c11794b29d9da31697d5bc522f8a3 (image=quay.io/ceph/ceph:v19, name=nice_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:12:45 compute-0 podman[75909]: 2025-09-30 14:12:45.352255842 +0000 UTC m=+0.250772365 container start 1a27fcb4306c1ed4fd01e7ec94087750a95c11794b29d9da31697d5bc522f8a3 (image=quay.io/ceph/ceph:v19, name=nice_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:12:45 compute-0 podman[75909]: 2025-09-30 14:12:45.500898838 +0000 UTC m=+0.399415361 container attach 1a27fcb4306c1ed4fd01e7ec94087750a95c11794b29d9da31697d5bc522f8a3 (image=quay.io/ceph/ceph:v19, name=nice_visvesvaraya, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:12:45 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:45 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service mon spec with placement count:5
Sep 30 14:12:45 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Sep 30 14:12:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 14:12:45 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:45 compute-0 nice_visvesvaraya[75946]: Scheduled mon update...
Sep 30 14:12:45 compute-0 systemd[1]: libpod-1a27fcb4306c1ed4fd01e7ec94087750a95c11794b29d9da31697d5bc522f8a3.scope: Deactivated successfully.
Sep 30 14:12:45 compute-0 podman[75909]: 2025-09-30 14:12:45.813373975 +0000 UTC m=+0.711890518 container died 1a27fcb4306c1ed4fd01e7ec94087750a95c11794b29d9da31697d5bc522f8a3 (image=quay.io/ceph/ceph:v19, name=nice_visvesvaraya, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-517ddaf95f32cded1786ce028818f210398e7ef669eceba256a8f8304fd0b610-merged.mount: Deactivated successfully.
Sep 30 14:12:46 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:46 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:46 compute-0 ceph-mon[74194]: Added host compute-0
Sep 30 14:12:46 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:12:46 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:47 compute-0 podman[75963]: 2025-09-30 14:12:47.285698744 +0000 UTC m=+1.919418929 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:47 compute-0 podman[75909]: 2025-09-30 14:12:47.38396992 +0000 UTC m=+2.282486473 container remove 1a27fcb4306c1ed4fd01e7ec94087750a95c11794b29d9da31697d5bc522f8a3 (image=quay.io/ceph/ceph:v19, name=nice_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:12:47 compute-0 systemd[1]: libpod-conmon-1a27fcb4306c1ed4fd01e7ec94087750a95c11794b29d9da31697d5bc522f8a3.scope: Deactivated successfully.
Sep 30 14:12:47 compute-0 podman[76021]: 2025-09-30 14:12:47.424098864 +0000 UTC m=+0.020060113 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:47 compute-0 podman[76029]: 2025-09-30 14:12:47.438331304 +0000 UTC m=+0.020846993 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:47 compute-0 podman[76021]: 2025-09-30 14:12:47.58079408 +0000 UTC m=+0.176755309 container create 3623ef5b957e3324692a33093b5f90a13dabe7c8862dff8810f3cd8297631eeb (image=quay.io/ceph/ceph:v19, name=upbeat_brattain, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:12:47 compute-0 ceph-mon[74194]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:47 compute-0 ceph-mon[74194]: Saving service mon spec with placement count:5
Sep 30 14:12:47 compute-0 systemd[1]: Started libpod-conmon-3623ef5b957e3324692a33093b5f90a13dabe7c8862dff8810f3cd8297631eeb.scope.
Sep 30 14:12:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc5709fa9a01de4318cd0b8de0c3fe12398c072321c246e7bf8106325e38916/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc5709fa9a01de4318cd0b8de0c3fe12398c072321c246e7bf8106325e38916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc5709fa9a01de4318cd0b8de0c3fe12398c072321c246e7bf8106325e38916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:47 compute-0 podman[76021]: 2025-09-30 14:12:47.798089222 +0000 UTC m=+0.394050461 container init 3623ef5b957e3324692a33093b5f90a13dabe7c8862dff8810f3cd8297631eeb (image=quay.io/ceph/ceph:v19, name=upbeat_brattain, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:12:47 compute-0 podman[76021]: 2025-09-30 14:12:47.804703624 +0000 UTC m=+0.400664853 container start 3623ef5b957e3324692a33093b5f90a13dabe7c8862dff8810f3cd8297631eeb (image=quay.io/ceph/ceph:v19, name=upbeat_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:12:47 compute-0 podman[76021]: 2025-09-30 14:12:47.889613623 +0000 UTC m=+0.485574882 container attach 3623ef5b957e3324692a33093b5f90a13dabe7c8862dff8810f3cd8297631eeb (image=quay.io/ceph/ceph:v19, name=upbeat_brattain, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:48 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:48 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service mgr spec with placement count:2
Sep 30 14:12:48 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Sep 30 14:12:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 14:12:48 compute-0 podman[76029]: 2025-09-30 14:12:48.173412545 +0000 UTC m=+0.755928214 container create c187c017c4d9b1703cd1a351d3b758ce9e9ccb5f0c20e5db9b63e534c40bcc4b (image=quay.io/ceph/ceph:v19, name=thirsty_northcutt, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:12:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:48 compute-0 upbeat_brattain[76052]: Scheduled mgr update...
Sep 30 14:12:48 compute-0 podman[76021]: 2025-09-30 14:12:48.224923215 +0000 UTC m=+0.820884454 container died 3623ef5b957e3324692a33093b5f90a13dabe7c8862dff8810f3cd8297631eeb (image=quay.io/ceph/ceph:v19, name=upbeat_brattain, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 14:12:48 compute-0 systemd[1]: Started libpod-conmon-c187c017c4d9b1703cd1a351d3b758ce9e9ccb5f0c20e5db9b63e534c40bcc4b.scope.
Sep 30 14:12:48 compute-0 systemd[1]: libpod-3623ef5b957e3324692a33093b5f90a13dabe7c8862dff8810f3cd8297631eeb.scope: Deactivated successfully.
Sep 30 14:12:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:48 compute-0 podman[76029]: 2025-09-30 14:12:48.330040429 +0000 UTC m=+0.912556128 container init c187c017c4d9b1703cd1a351d3b758ce9e9ccb5f0c20e5db9b63e534c40bcc4b (image=quay.io/ceph/ceph:v19, name=thirsty_northcutt, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 14:12:48 compute-0 podman[76029]: 2025-09-30 14:12:48.33699434 +0000 UTC m=+0.919510009 container start c187c017c4d9b1703cd1a351d3b758ce9e9ccb5f0c20e5db9b63e534c40bcc4b (image=quay.io/ceph/ceph:v19, name=thirsty_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:48 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:48 compute-0 thirsty_northcutt[76080]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Sep 30 14:12:48 compute-0 systemd[1]: libpod-c187c017c4d9b1703cd1a351d3b758ce9e9ccb5f0c20e5db9b63e534c40bcc4b.scope: Deactivated successfully.
Sep 30 14:12:48 compute-0 podman[76029]: 2025-09-30 14:12:48.450563124 +0000 UTC m=+1.033078793 container attach c187c017c4d9b1703cd1a351d3b758ce9e9ccb5f0c20e5db9b63e534c40bcc4b (image=quay.io/ceph/ceph:v19, name=thirsty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:48 compute-0 podman[76029]: 2025-09-30 14:12:48.452018602 +0000 UTC m=+1.034534271 container died c187c017c4d9b1703cd1a351d3b758ce9e9ccb5f0c20e5db9b63e534c40bcc4b (image=quay.io/ceph/ceph:v19, name=thirsty_northcutt, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-09e9accc5bc3c5c2b4c5f5766dc5bd710f1b73279221fdc2aa96c00f449d90cd-merged.mount: Deactivated successfully.
Sep 30 14:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cc5709fa9a01de4318cd0b8de0c3fe12398c072321c246e7bf8106325e38916-merged.mount: Deactivated successfully.
Sep 30 14:12:48 compute-0 podman[76021]: 2025-09-30 14:12:48.592870716 +0000 UTC m=+1.188831945 container remove 3623ef5b957e3324692a33093b5f90a13dabe7c8862dff8810f3cd8297631eeb (image=quay.io/ceph/ceph:v19, name=upbeat_brattain, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:12:48 compute-0 systemd[1]: libpod-conmon-3623ef5b957e3324692a33093b5f90a13dabe7c8862dff8810f3cd8297631eeb.scope: Deactivated successfully.
Sep 30 14:12:48 compute-0 podman[76029]: 2025-09-30 14:12:48.680712221 +0000 UTC m=+1.263227890 container remove c187c017c4d9b1703cd1a351d3b758ce9e9ccb5f0c20e5db9b63e534c40bcc4b (image=quay.io/ceph/ceph:v19, name=thirsty_northcutt, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:12:48 compute-0 systemd[1]: libpod-conmon-c187c017c4d9b1703cd1a351d3b758ce9e9ccb5f0c20e5db9b63e534c40bcc4b.scope: Deactivated successfully.
Sep 30 14:12:48 compute-0 sudo[75905]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Sep 30 14:12:48 compute-0 podman[76111]: 2025-09-30 14:12:48.690599168 +0000 UTC m=+0.078350869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:48 compute-0 podman[76111]: 2025-09-30 14:12:48.867852759 +0000 UTC m=+0.255604430 container create b5493c6cf715cf902321881fcb1d4d38e34ad183c15572e1cca77c1916fc6dfa (image=quay.io/ceph/ceph:v19, name=stupefied_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 14:12:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:48 compute-0 systemd[1]: Started libpod-conmon-b5493c6cf715cf902321881fcb1d4d38e34ad183c15572e1cca77c1916fc6dfa.scope.
Sep 30 14:12:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:48 compute-0 sudo[76128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a8634e78ebb0c8d68da2ee6a1e47a1f9e26c222f9645ae1c723fe3ea358686/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a8634e78ebb0c8d68da2ee6a1e47a1f9e26c222f9645ae1c723fe3ea358686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a8634e78ebb0c8d68da2ee6a1e47a1f9e26c222f9645ae1c723fe3ea358686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:48 compute-0 sudo[76128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:48 compute-0 sudo[76128]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:48 compute-0 podman[76111]: 2025-09-30 14:12:48.980421657 +0000 UTC m=+0.368173358 container init b5493c6cf715cf902321881fcb1d4d38e34ad183c15572e1cca77c1916fc6dfa (image=quay.io/ceph/ceph:v19, name=stupefied_cartwright, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 14:12:48 compute-0 podman[76111]: 2025-09-30 14:12:48.988818256 +0000 UTC m=+0.376569937 container start b5493c6cf715cf902321881fcb1d4d38e34ad183c15572e1cca77c1916fc6dfa (image=quay.io/ceph/ceph:v19, name=stupefied_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:12:49 compute-0 podman[76111]: 2025-09-30 14:12:49.008867477 +0000 UTC m=+0.396619148 container attach b5493c6cf715cf902321881fcb1d4d38e34ad183c15572e1cca77c1916fc6dfa (image=quay.io/ceph/ceph:v19, name=stupefied_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:12:49 compute-0 sudo[76158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 14:12:49 compute-0 sudo[76158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:49 compute-0 ceph-mon[74194]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:49 compute-0 ceph-mon[74194]: Saving service mgr spec with placement count:2
Sep 30 14:12:49 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:49 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:49 compute-0 sudo[76158]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:12:49 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:49 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service crash spec with placement *
Sep 30 14:12:49 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Sep 30 14:12:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 14:12:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:49 compute-0 stupefied_cartwright[76153]: Scheduled crash update...
Sep 30 14:12:49 compute-0 systemd[1]: libpod-b5493c6cf715cf902321881fcb1d4d38e34ad183c15572e1cca77c1916fc6dfa.scope: Deactivated successfully.
Sep 30 14:12:49 compute-0 podman[76111]: 2025-09-30 14:12:49.444459197 +0000 UTC m=+0.832210868 container died b5493c6cf715cf902321881fcb1d4d38e34ad183c15572e1cca77c1916fc6dfa (image=quay.io/ceph/ceph:v19, name=stupefied_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:49 compute-0 sudo[76224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:12:49 compute-0 sudo[76224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:49 compute-0 sudo[76224]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:49 compute-0 sudo[76257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:12:49 compute-0 sudo[76257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-73a8634e78ebb0c8d68da2ee6a1e47a1f9e26c222f9645ae1c723fe3ea358686-merged.mount: Deactivated successfully.
Sep 30 14:12:49 compute-0 podman[76111]: 2025-09-30 14:12:49.700009284 +0000 UTC m=+1.087760955 container remove b5493c6cf715cf902321881fcb1d4d38e34ad183c15572e1cca77c1916fc6dfa (image=quay.io/ceph/ceph:v19, name=stupefied_cartwright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:49 compute-0 systemd[1]: libpod-conmon-b5493c6cf715cf902321881fcb1d4d38e34ad183c15572e1cca77c1916fc6dfa.scope: Deactivated successfully.
Sep 30 14:12:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:12:49 compute-0 podman[76286]: 2025-09-30 14:12:49.749896682 +0000 UTC m=+0.024512789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:49 compute-0 podman[76286]: 2025-09-30 14:12:49.865679794 +0000 UTC m=+0.140295881 container create dfcf7957b6acf7e6dcf4d2144b3e7320a5dcfca3e7ca4b35a38d59c1b31ecdba (image=quay.io/ceph/ceph:v19, name=jovial_wiles, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 14:12:49 compute-0 systemd[1]: Started libpod-conmon-dfcf7957b6acf7e6dcf4d2144b3e7320a5dcfca3e7ca4b35a38d59c1b31ecdba.scope.
Sep 30 14:12:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5eebd0aa4edbd6b9b0e230800512f656ebd80c128c4ba680cbfc7c0cdedd58/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5eebd0aa4edbd6b9b0e230800512f656ebd80c128c4ba680cbfc7c0cdedd58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5eebd0aa4edbd6b9b0e230800512f656ebd80c128c4ba680cbfc7c0cdedd58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:50 compute-0 podman[76286]: 2025-09-30 14:12:50.115235215 +0000 UTC m=+0.389851332 container init dfcf7957b6acf7e6dcf4d2144b3e7320a5dcfca3e7ca4b35a38d59c1b31ecdba (image=quay.io/ceph/ceph:v19, name=jovial_wiles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:12:50 compute-0 podman[76286]: 2025-09-30 14:12:50.123097589 +0000 UTC m=+0.397713696 container start dfcf7957b6acf7e6dcf4d2144b3e7320a5dcfca3e7ca4b35a38d59c1b31ecdba (image=quay.io/ceph/ceph:v19, name=jovial_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:50 compute-0 podman[76286]: 2025-09-30 14:12:50.169859766 +0000 UTC m=+0.444476033 container attach dfcf7957b6acf7e6dcf4d2144b3e7320a5dcfca3e7ca4b35a38d59c1b31ecdba (image=quay.io/ceph/ceph:v19, name=jovial_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:50 compute-0 ceph-mgr[74485]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Sep 30 14:12:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Sep 30 14:12:50 compute-0 ceph-mon[74194]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:50 compute-0 ceph-mon[74194]: Saving service crash spec with placement *
Sep 30 14:12:50 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:50 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:50 compute-0 podman[76394]: 2025-09-30 14:12:50.591500243 +0000 UTC m=+0.209420298 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2470270130' entity='client.admin' 
Sep 30 14:12:50 compute-0 systemd[1]: libpod-dfcf7957b6acf7e6dcf4d2144b3e7320a5dcfca3e7ca4b35a38d59c1b31ecdba.scope: Deactivated successfully.
Sep 30 14:12:50 compute-0 podman[76286]: 2025-09-30 14:12:50.649128192 +0000 UTC m=+0.923744279 container died dfcf7957b6acf7e6dcf4d2144b3e7320a5dcfca3e7ca4b35a38d59c1b31ecdba (image=quay.io/ceph/ceph:v19, name=jovial_wiles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:12:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe5eebd0aa4edbd6b9b0e230800512f656ebd80c128c4ba680cbfc7c0cdedd58-merged.mount: Deactivated successfully.
Sep 30 14:12:51 compute-0 podman[76417]: 2025-09-30 14:12:51.183974525 +0000 UTC m=+0.556920518 container remove dfcf7957b6acf7e6dcf4d2144b3e7320a5dcfca3e7ca4b35a38d59c1b31ecdba (image=quay.io/ceph/ceph:v19, name=jovial_wiles, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:51 compute-0 systemd[1]: libpod-conmon-dfcf7957b6acf7e6dcf4d2144b3e7320a5dcfca3e7ca4b35a38d59c1b31ecdba.scope: Deactivated successfully.
Sep 30 14:12:51 compute-0 podman[76394]: 2025-09-30 14:12:51.220566107 +0000 UTC m=+0.838486132 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 14:12:51 compute-0 podman[76440]: 2025-09-30 14:12:51.475608241 +0000 UTC m=+0.265652471 container create 2ffda0ab6c935cd8275241020e66a77b68b5d1e3be304307f2dc0557ccbbfd49 (image=quay.io/ceph/ceph:v19, name=pedantic_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:51 compute-0 podman[76440]: 2025-09-30 14:12:51.416018061 +0000 UTC m=+0.206062331 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:51 compute-0 systemd[1]: Started libpod-conmon-2ffda0ab6c935cd8275241020e66a77b68b5d1e3be304307f2dc0557ccbbfd49.scope.
Sep 30 14:12:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b866cc15b0d6c0a873c0fd10112207950cfb9fb25bd930572f00f816867686/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b866cc15b0d6c0a873c0fd10112207950cfb9fb25bd930572f00f816867686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b866cc15b0d6c0a873c0fd10112207950cfb9fb25bd930572f00f816867686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:51 compute-0 podman[76440]: 2025-09-30 14:12:51.54897359 +0000 UTC m=+0.339017850 container init 2ffda0ab6c935cd8275241020e66a77b68b5d1e3be304307f2dc0557ccbbfd49 (image=quay.io/ceph/ceph:v19, name=pedantic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:51 compute-0 podman[76440]: 2025-09-30 14:12:51.556224468 +0000 UTC m=+0.346268708 container start 2ffda0ab6c935cd8275241020e66a77b68b5d1e3be304307f2dc0557ccbbfd49 (image=quay.io/ceph/ceph:v19, name=pedantic_villani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:12:51 compute-0 sudo[76257]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:12:51 compute-0 podman[76440]: 2025-09-30 14:12:51.606959488 +0000 UTC m=+0.397003728 container attach 2ffda0ab6c935cd8275241020e66a77b68b5d1e3be304307f2dc0557ccbbfd49 (image=quay.io/ceph/ceph:v19, name=pedantic_villani, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:51 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2470270130' entity='client.admin' 
Sep 30 14:12:51 compute-0 sudo[76484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:12:51 compute-0 sudo[76484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:51 compute-0 sudo[76484]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:51 compute-0 sudo[76528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:12:51 compute-0 sudo[76528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:51 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Sep 30 14:12:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:51 compute-0 systemd[1]: libpod-2ffda0ab6c935cd8275241020e66a77b68b5d1e3be304307f2dc0557ccbbfd49.scope: Deactivated successfully.
Sep 30 14:12:51 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76567 (sysctl)
Sep 30 14:12:51 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Sep 30 14:12:51 compute-0 podman[76566]: 2025-09-30 14:12:51.985129195 +0000 UTC m=+0.021235144 container died 2ffda0ab6c935cd8275241020e66a77b68b5d1e3be304307f2dc0557ccbbfd49 (image=quay.io/ceph/ceph:v19, name=pedantic_villani, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:12:51 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Sep 30 14:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-08b866cc15b0d6c0a873c0fd10112207950cfb9fb25bd930572f00f816867686-merged.mount: Deactivated successfully.
Sep 30 14:12:52 compute-0 podman[76566]: 2025-09-30 14:12:52.09913582 +0000 UTC m=+0.135241749 container remove 2ffda0ab6c935cd8275241020e66a77b68b5d1e3be304307f2dc0557ccbbfd49 (image=quay.io/ceph/ceph:v19, name=pedantic_villani, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:12:52 compute-0 systemd[1]: libpod-conmon-2ffda0ab6c935cd8275241020e66a77b68b5d1e3be304307f2dc0557ccbbfd49.scope: Deactivated successfully.
Sep 30 14:12:52 compute-0 podman[76587]: 2025-09-30 14:12:52.170736163 +0000 UTC m=+0.046474920 container create 68c381a08d2a407fcb8f47f29f971074e1994f8bcd15334177234fdbfba54157 (image=quay.io/ceph/ceph:v19, name=determined_gould, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:12:52 compute-0 systemd[1]: Started libpod-conmon-68c381a08d2a407fcb8f47f29f971074e1994f8bcd15334177234fdbfba54157.scope.
Sep 30 14:12:52 compute-0 podman[76587]: 2025-09-30 14:12:52.147091218 +0000 UTC m=+0.022829995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa00634f14db4b914e5921f585c5a20a7e289bd728870a075a87b58f3112edab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa00634f14db4b914e5921f585c5a20a7e289bd728870a075a87b58f3112edab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa00634f14db4b914e5921f585c5a20a7e289bd728870a075a87b58f3112edab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:52 compute-0 sudo[76528]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:52 compute-0 podman[76587]: 2025-09-30 14:12:52.270695703 +0000 UTC m=+0.146434470 container init 68c381a08d2a407fcb8f47f29f971074e1994f8bcd15334177234fdbfba54157 (image=quay.io/ceph/ceph:v19, name=determined_gould, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:52 compute-0 podman[76587]: 2025-09-30 14:12:52.277750546 +0000 UTC m=+0.153489303 container start 68c381a08d2a407fcb8f47f29f971074e1994f8bcd15334177234fdbfba54157 (image=quay.io/ceph/ceph:v19, name=determined_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:52 compute-0 podman[76587]: 2025-09-30 14:12:52.299271066 +0000 UTC m=+0.175009823 container attach 68c381a08d2a407fcb8f47f29f971074e1994f8bcd15334177234fdbfba54157 (image=quay.io/ceph/ceph:v19, name=determined_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:52 compute-0 sudo[76622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:12:52 compute-0 sudo[76622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:52 compute-0 sudo[76622]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:52 compute-0 ceph-mgr[74485]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Sep 30 14:12:52 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Sep 30 14:12:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:12:52 compute-0 sudo[76647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 14:12:52 compute-0 sudo[76647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:52 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 14:12:52 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:52 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:52 compute-0 ceph-mon[74194]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Sep 30 14:12:52 compute-0 sudo[76647]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:12:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:52 compute-0 ceph-mgr[74485]: [cephadm INFO root] Added label _admin to host compute-0
Sep 30 14:12:52 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Sep 30 14:12:52 compute-0 determined_gould[76617]: Added label _admin to host compute-0
Sep 30 14:12:52 compute-0 systemd[1]: libpod-68c381a08d2a407fcb8f47f29f971074e1994f8bcd15334177234fdbfba54157.scope: Deactivated successfully.
Sep 30 14:12:52 compute-0 podman[76587]: 2025-09-30 14:12:52.724416025 +0000 UTC m=+0.600154792 container died 68c381a08d2a407fcb8f47f29f971074e1994f8bcd15334177234fdbfba54157 (image=quay.io/ceph/ceph:v19, name=determined_gould, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:52 compute-0 sudo[76714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:12:52 compute-0 sudo[76714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:52 compute-0 sudo[76714]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:52 compute-0 sudo[76743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- inventory --format=json-pretty --filter-for-batch
Sep 30 14:12:52 compute-0 sudo[76743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa00634f14db4b914e5921f585c5a20a7e289bd728870a075a87b58f3112edab-merged.mount: Deactivated successfully.
Sep 30 14:12:53 compute-0 podman[76587]: 2025-09-30 14:12:53.019067819 +0000 UTC m=+0.894806576 container remove 68c381a08d2a407fcb8f47f29f971074e1994f8bcd15334177234fdbfba54157 (image=quay.io/ceph/ceph:v19, name=determined_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 14:12:53 compute-0 systemd[1]: libpod-conmon-68c381a08d2a407fcb8f47f29f971074e1994f8bcd15334177234fdbfba54157.scope: Deactivated successfully.
Sep 30 14:12:53 compute-0 podman[76770]: 2025-09-30 14:12:53.068202207 +0000 UTC m=+0.022338842 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:53 compute-0 podman[76770]: 2025-09-30 14:12:53.16751965 +0000 UTC m=+0.121656285 container create 657caaaccb603cf0f09d8369227babd29915bf175812a421b4c2c4f8f2b87898 (image=quay.io/ceph/ceph:v19, name=stupefied_neumann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 14:12:53 compute-0 systemd[1]: Started libpod-conmon-657caaaccb603cf0f09d8369227babd29915bf175812a421b4c2c4f8f2b87898.scope.
Sep 30 14:12:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f21627b9f23ae495a587d101561456a40e9568a7f16ca7dfdcdac2f4a180b1e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f21627b9f23ae495a587d101561456a40e9568a7f16ca7dfdcdac2f4a180b1e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f21627b9f23ae495a587d101561456a40e9568a7f16ca7dfdcdac2f4a180b1e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:53 compute-0 podman[76770]: 2025-09-30 14:12:53.362451202 +0000 UTC m=+0.316587857 container init 657caaaccb603cf0f09d8369227babd29915bf175812a421b4c2c4f8f2b87898 (image=quay.io/ceph/ceph:v19, name=stupefied_neumann, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:12:53 compute-0 podman[76770]: 2025-09-30 14:12:53.368467248 +0000 UTC m=+0.322603883 container start 657caaaccb603cf0f09d8369227babd29915bf175812a421b4c2c4f8f2b87898 (image=quay.io/ceph/ceph:v19, name=stupefied_neumann, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 14:12:53 compute-0 podman[76770]: 2025-09-30 14:12:53.476828326 +0000 UTC m=+0.430964961 container attach 657caaaccb603cf0f09d8369227babd29915bf175812a421b4c2c4f8f2b87898 (image=quay.io/ceph/ceph:v19, name=stupefied_neumann, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Sep 30 14:12:53 compute-0 podman[76847]: 2025-09-30 14:12:53.559202899 +0000 UTC m=+0.020105074 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:12:53 compute-0 podman[76847]: 2025-09-30 14:12:53.788314419 +0000 UTC m=+0.249216564 container create 081de4fd03d2c9f020ca79f1293552b0614bd9bf0032312da69f42978cbb5cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Sep 30 14:12:53 compute-0 ceph-mon[74194]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:53 compute-0 ceph-mon[74194]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:12:53 compute-0 ceph-mon[74194]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:12:53 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:53 compute-0 ceph-mon[74194]: Added label _admin to host compute-0
Sep 30 14:12:53 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:53 compute-0 systemd[1]: Started libpod-conmon-081de4fd03d2c9f020ca79f1293552b0614bd9bf0032312da69f42978cbb5cf2.scope.
Sep 30 14:12:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3196363516' entity='client.admin' 
Sep 30 14:12:54 compute-0 podman[76847]: 2025-09-30 14:12:54.042096801 +0000 UTC m=+0.502998966 container init 081de4fd03d2c9f020ca79f1293552b0614bd9bf0032312da69f42978cbb5cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:54 compute-0 stupefied_neumann[76810]: set mgr/dashboard/cluster/status
Sep 30 14:12:54 compute-0 podman[76847]: 2025-09-30 14:12:54.051780132 +0000 UTC m=+0.512682267 container start 081de4fd03d2c9f020ca79f1293552b0614bd9bf0032312da69f42978cbb5cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:54 compute-0 mystifying_lamport[76864]: 167 167
Sep 30 14:12:54 compute-0 systemd[1]: libpod-081de4fd03d2c9f020ca79f1293552b0614bd9bf0032312da69f42978cbb5cf2.scope: Deactivated successfully.
Sep 30 14:12:54 compute-0 podman[76847]: 2025-09-30 14:12:54.057014858 +0000 UTC m=+0.517917003 container attach 081de4fd03d2c9f020ca79f1293552b0614bd9bf0032312da69f42978cbb5cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lamport, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:12:54 compute-0 podman[76847]: 2025-09-30 14:12:54.05745989 +0000 UTC m=+0.518362035 container died 081de4fd03d2c9f020ca79f1293552b0614bd9bf0032312da69f42978cbb5cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:12:54 compute-0 systemd[1]: libpod-657caaaccb603cf0f09d8369227babd29915bf175812a421b4c2c4f8f2b87898.scope: Deactivated successfully.
Sep 30 14:12:54 compute-0 podman[76770]: 2025-09-30 14:12:54.072757438 +0000 UTC m=+1.026894073 container died 657caaaccb603cf0f09d8369227babd29915bf175812a421b4c2c4f8f2b87898 (image=quay.io/ceph/ceph:v19, name=stupefied_neumann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c5fe51195c0c57a858a9ce9ca30444a949c148dc47a726625d12f314d881b6c-merged.mount: Deactivated successfully.
Sep 30 14:12:54 compute-0 podman[76847]: 2025-09-30 14:12:54.102876181 +0000 UTC m=+0.563778326 container remove 081de4fd03d2c9f020ca79f1293552b0614bd9bf0032312da69f42978cbb5cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_lamport, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:54 compute-0 systemd[1]: libpod-conmon-081de4fd03d2c9f020ca79f1293552b0614bd9bf0032312da69f42978cbb5cf2.scope: Deactivated successfully.
Sep 30 14:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f21627b9f23ae495a587d101561456a40e9568a7f16ca7dfdcdac2f4a180b1e5-merged.mount: Deactivated successfully.
Sep 30 14:12:54 compute-0 podman[76770]: 2025-09-30 14:12:54.148619431 +0000 UTC m=+1.102756066 container remove 657caaaccb603cf0f09d8369227babd29915bf175812a421b4c2c4f8f2b87898 (image=quay.io/ceph/ceph:v19, name=stupefied_neumann, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:12:54 compute-0 systemd[1]: libpod-conmon-657caaaccb603cf0f09d8369227babd29915bf175812a421b4c2c4f8f2b87898.scope: Deactivated successfully.
Sep 30 14:12:54 compute-0 sudo[73138]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:54 compute-0 podman[76902]: 2025-09-30 14:12:54.315003989 +0000 UTC m=+0.043990125 container create ab6a4d12d6ddcbb541845282a0d4cb46564b85917219351996f9316f1cd497a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wu, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:12:54 compute-0 systemd[1]: Started libpod-conmon-ab6a4d12d6ddcbb541845282a0d4cb46564b85917219351996f9316f1cd497a2.scope.
Sep 30 14:12:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:12:54 compute-0 podman[76902]: 2025-09-30 14:12:54.29390041 +0000 UTC m=+0.022886586 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:12:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aed35c02b392dce5c4d8d4670925c4be4735ef1132b068c9c529dd2f8fab4ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aed35c02b392dce5c4d8d4670925c4be4735ef1132b068c9c529dd2f8fab4ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aed35c02b392dce5c4d8d4670925c4be4735ef1132b068c9c529dd2f8fab4ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aed35c02b392dce5c4d8d4670925c4be4735ef1132b068c9c529dd2f8fab4ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:54 compute-0 podman[76902]: 2025-09-30 14:12:54.420446782 +0000 UTC m=+0.149432928 container init ab6a4d12d6ddcbb541845282a0d4cb46564b85917219351996f9316f1cd497a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:54 compute-0 podman[76902]: 2025-09-30 14:12:54.427465854 +0000 UTC m=+0.156451990 container start ab6a4d12d6ddcbb541845282a0d4cb46564b85917219351996f9316f1cd497a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:12:54 compute-0 podman[76902]: 2025-09-30 14:12:54.440137624 +0000 UTC m=+0.169123780 container attach ab6a4d12d6ddcbb541845282a0d4cb46564b85917219351996f9316f1cd497a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:12:54 compute-0 sudo[76947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkuhmfvlrxhuszntdrxjdnqyeuqszinq ; /usr/bin/python3'
Sep 30 14:12:54 compute-0 sudo[76947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:12:54 compute-0 python3[76949]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:12:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:12:54 compute-0 podman[76955]: 2025-09-30 14:12:54.735543238 +0000 UTC m=+0.037872476 container create 42188a3bc60ecff43473af6766294b60c22d4e2ec5a18dce2bb6d9a13faf2a2b (image=quay.io/ceph/ceph:v19, name=zen_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:12:54 compute-0 podman[76955]: 2025-09-30 14:12:54.717838508 +0000 UTC m=+0.020167776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:54 compute-0 systemd[1]: Started libpod-conmon-42188a3bc60ecff43473af6766294b60c22d4e2ec5a18dce2bb6d9a13faf2a2b.scope.
Sep 30 14:12:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff32a2757df4008831f1d99174cc95070c06f55e351afcdb9afd37cbf0b018e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff32a2757df4008831f1d99174cc95070c06f55e351afcdb9afd37cbf0b018e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:54 compute-0 podman[76955]: 2025-09-30 14:12:54.912590564 +0000 UTC m=+0.214919832 container init 42188a3bc60ecff43473af6766294b60c22d4e2ec5a18dce2bb6d9a13faf2a2b (image=quay.io/ceph/ceph:v19, name=zen_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 14:12:54 compute-0 podman[76955]: 2025-09-30 14:12:54.919477723 +0000 UTC m=+0.221806961 container start 42188a3bc60ecff43473af6766294b60c22d4e2ec5a18dce2bb6d9a13faf2a2b (image=quay.io/ceph/ceph:v19, name=zen_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:54 compute-0 podman[76955]: 2025-09-30 14:12:54.926772642 +0000 UTC m=+0.229101900 container attach 42188a3bc60ecff43473af6766294b60c22d4e2ec5a18dce2bb6d9a13faf2a2b (image=quay.io/ceph/ceph:v19, name=zen_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:12:55 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3196363516' entity='client.admin' 
Sep 30 14:12:55 compute-0 ceph-mon[74194]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:12:55 compute-0 naughty_wu[76919]: [
Sep 30 14:12:55 compute-0 naughty_wu[76919]:     {
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         "available": false,
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         "being_replaced": false,
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         "ceph_device_lvm": false,
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         "device_id": "QEMU_DVD-ROM_QM00001",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         "lsm_data": {},
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         "lvs": [],
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         "path": "/dev/sr0",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         "rejected_reasons": [
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "Insufficient space (<5GB)",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "Has a FileSystem"
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         ],
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         "sys_api": {
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "actuators": null,
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "device_nodes": [
Sep 30 14:12:55 compute-0 naughty_wu[76919]:                 "sr0"
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             ],
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "devname": "sr0",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "human_readable_size": "482.00 KB",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "id_bus": "ata",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "model": "QEMU DVD-ROM",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "nr_requests": "2",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "parent": "/dev/sr0",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "partitions": {},
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "path": "/dev/sr0",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "removable": "1",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "rev": "2.5+",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "ro": "0",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "rotational": "0",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "sas_address": "",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "sas_device_handle": "",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "scheduler_mode": "mq-deadline",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "sectors": 0,
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "sectorsize": "2048",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "size": 493568.0,
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "support_discard": "2048",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "type": "disk",
Sep 30 14:12:55 compute-0 naughty_wu[76919]:             "vendor": "QEMU"
Sep 30 14:12:55 compute-0 naughty_wu[76919]:         }
Sep 30 14:12:55 compute-0 naughty_wu[76919]:     }
Sep 30 14:12:55 compute-0 naughty_wu[76919]: ]
Sep 30 14:12:55 compute-0 systemd[1]: libpod-ab6a4d12d6ddcbb541845282a0d4cb46564b85917219351996f9316f1cd497a2.scope: Deactivated successfully.
Sep 30 14:12:55 compute-0 podman[76902]: 2025-09-30 14:12:55.134967628 +0000 UTC m=+0.863953764 container died ab6a4d12d6ddcbb541845282a0d4cb46564b85917219351996f9316f1cd497a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 14:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aed35c02b392dce5c4d8d4670925c4be4735ef1132b068c9c529dd2f8fab4ff-merged.mount: Deactivated successfully.
Sep 30 14:12:55 compute-0 podman[76902]: 2025-09-30 14:12:55.214735123 +0000 UTC m=+0.943721259 container remove ab6a4d12d6ddcbb541845282a0d4cb46564b85917219351996f9316f1cd497a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:12:55 compute-0 systemd[1]: libpod-conmon-ab6a4d12d6ddcbb541845282a0d4cb46564b85917219351996f9316f1cd497a2.scope: Deactivated successfully.
Sep 30 14:12:55 compute-0 sudo[76743]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:12:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Sep 30 14:12:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:12:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2083723829' entity='client.admin' 
Sep 30 14:12:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:12:55 compute-0 systemd[1]: libpod-42188a3bc60ecff43473af6766294b60c22d4e2ec5a18dce2bb6d9a13faf2a2b.scope: Deactivated successfully.
Sep 30 14:12:55 compute-0 podman[76955]: 2025-09-30 14:12:55.374989362 +0000 UTC m=+0.677318620 container died 42188a3bc60ecff43473af6766294b60c22d4e2ec5a18dce2bb6d9a13faf2a2b (image=quay.io/ceph/ceph:v19, name=zen_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:12:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 14:12:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:12:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:12:55 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:12:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:12:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:12:55 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:12:55 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fff32a2757df4008831f1d99174cc95070c06f55e351afcdb9afd37cbf0b018e-merged.mount: Deactivated successfully.
Sep 30 14:12:55 compute-0 podman[76955]: 2025-09-30 14:12:55.427776635 +0000 UTC m=+0.730105873 container remove 42188a3bc60ecff43473af6766294b60c22d4e2ec5a18dce2bb6d9a13faf2a2b (image=quay.io/ceph/ceph:v19, name=zen_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:55 compute-0 systemd[1]: libpod-conmon-42188a3bc60ecff43473af6766294b60c22d4e2ec5a18dce2bb6d9a13faf2a2b.scope: Deactivated successfully.
Sep 30 14:12:55 compute-0 sudo[76947]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:55 compute-0 sudo[78157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 14:12:55 compute-0 sudo[78157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:55 compute-0 sudo[78157]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:55 compute-0 sudo[78182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph
Sep 30 14:12:55 compute-0 sudo[78182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:55 compute-0 sudo[78182]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:55 compute-0 sudo[78207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:12:55 compute-0 sudo[78207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:55 compute-0 sudo[78207]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:55 compute-0 sudo[78232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:55 compute-0 sudo[78232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:55 compute-0 sudo[78232]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:55 compute-0 sudo[78257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:12:55 compute-0 sudo[78257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:55 compute-0 sudo[78257]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:55 compute-0 sudo[78327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:12:55 compute-0 sudo[78327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:55 compute-0 sudo[78327]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:55 compute-0 sudo[78381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:12:55 compute-0 sudo[78381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:55 compute-0 sudo[78381]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:55 compute-0 sudo[78430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Sep 30 14:12:55 compute-0 sudo[78430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:55 compute-0 sudo[78430]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:55 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:12:55 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:12:55 compute-0 sudo[78455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:12:55 compute-0 sudo[78455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:55 compute-0 sudo[78455]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 sudo[78480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:12:56 compute-0 sudo[78480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78480]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 sudo[78510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:12:56 compute-0 sudo[78510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78510]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 sudo[78559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:56 compute-0 sudo[78559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78559]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 sudo[78650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnekyhgzctvyavqladyjurowsfgrwuws ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759241575.75148-35095-69380012072863/async_wrapper.py j47590079662 30 /home/zuul/.ansible/tmp/ansible-tmp-1759241575.75148-35095-69380012072863/AnsiballZ_command.py _'
Sep 30 14:12:56 compute-0 sudo[78650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:12:56 compute-0 sudo[78608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:12:56 compute-0 sudo[78608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78608]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 sudo[78678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:12:56 compute-0 sudo[78678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78678]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 ansible-async_wrapper.py[78653]: Invoked with j47590079662 30 /home/zuul/.ansible/tmp/ansible-tmp-1759241575.75148-35095-69380012072863/AnsiballZ_command.py _
Sep 30 14:12:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:56 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2083723829' entity='client.admin' 
Sep 30 14:12:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:12:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:12:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:12:56 compute-0 ceph-mon[74194]: Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:12:56 compute-0 ansible-async_wrapper.py[78728]: Starting module and watcher
Sep 30 14:12:56 compute-0 ansible-async_wrapper.py[78728]: Start watching 78730 (30)
Sep 30 14:12:56 compute-0 sudo[78703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:12:56 compute-0 ansible-async_wrapper.py[78730]: Start module (78730)
Sep 30 14:12:56 compute-0 ansible-async_wrapper.py[78653]: Return async_wrapper task started.
Sep 30 14:12:56 compute-0 sudo[78703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78703]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 sudo[78650]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:12:56 compute-0 sudo[78733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:12:56 compute-0 sudo[78733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78733]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:12:56 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:12:56 compute-0 sudo[78758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 14:12:56 compute-0 sudo[78758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78758]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 python3[78732]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:12:56 compute-0 sudo[78783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph
Sep 30 14:12:56 compute-0 sudo[78783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78783]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 podman[78806]: 2025-09-30 14:12:56.533164468 +0000 UTC m=+0.045343560 container create f115caab9c7387305dbc05d3d9a53d094510a7be8438aeeb812858a2972247d1 (image=quay.io/ceph/ceph:v19, name=inspiring_keldysh, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 14:12:56 compute-0 systemd[1]: Started libpod-conmon-f115caab9c7387305dbc05d3d9a53d094510a7be8438aeeb812858a2972247d1.scope.
Sep 30 14:12:56 compute-0 sudo[78819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:12:56 compute-0 sudo[78819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78819]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a28eac43f61367c9812f9207f0d84d6f9a16f866cf921d169a5cd38b54fefa5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a28eac43f61367c9812f9207f0d84d6f9a16f866cf921d169a5cd38b54fefa5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:56 compute-0 podman[78806]: 2025-09-30 14:12:56.513745373 +0000 UTC m=+0.025924495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:56 compute-0 podman[78806]: 2025-09-30 14:12:56.618400984 +0000 UTC m=+0.130580096 container init f115caab9c7387305dbc05d3d9a53d094510a7be8438aeeb812858a2972247d1 (image=quay.io/ceph/ceph:v19, name=inspiring_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:12:56 compute-0 podman[78806]: 2025-09-30 14:12:56.625075108 +0000 UTC m=+0.137254190 container start f115caab9c7387305dbc05d3d9a53d094510a7be8438aeeb812858a2972247d1 (image=quay.io/ceph/ceph:v19, name=inspiring_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:12:56 compute-0 sudo[78851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:56 compute-0 podman[78806]: 2025-09-30 14:12:56.629356959 +0000 UTC m=+0.141536051 container attach f115caab9c7387305dbc05d3d9a53d094510a7be8438aeeb812858a2972247d1 (image=quay.io/ceph/ceph:v19, name=inspiring_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:56 compute-0 sudo[78851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78851]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 sudo[78877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:12:56 compute-0 sudo[78877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78877]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 sudo[78927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:12:56 compute-0 sudo[78927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78927]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 sudo[78969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:12:56 compute-0 sudo[78969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78969]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 sudo[78994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Sep 30 14:12:56 compute-0 sudo[78994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[78994]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:12:56 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:12:56 compute-0 sudo[79019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:12:56 compute-0 sudo[79019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[79019]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:56 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:12:56 compute-0 sudo[79044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:12:56 compute-0 inspiring_keldysh[78847]: 
Sep 30 14:12:56 compute-0 inspiring_keldysh[78847]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Sep 30 14:12:56 compute-0 sudo[79044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:56 compute-0 sudo[79044]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:57 compute-0 systemd[1]: libpod-f115caab9c7387305dbc05d3d9a53d094510a7be8438aeeb812858a2972247d1.scope: Deactivated successfully.
Sep 30 14:12:57 compute-0 podman[78806]: 2025-09-30 14:12:57.010796822 +0000 UTC m=+0.522975924 container died f115caab9c7387305dbc05d3d9a53d094510a7be8438aeeb812858a2972247d1 (image=quay.io/ceph/ceph:v19, name=inspiring_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a28eac43f61367c9812f9207f0d84d6f9a16f866cf921d169a5cd38b54fefa5-merged.mount: Deactivated successfully.
Sep 30 14:12:57 compute-0 sudo[79071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:12:57 compute-0 sudo[79071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:57 compute-0 sudo[79071]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:57 compute-0 podman[78806]: 2025-09-30 14:12:57.053460111 +0000 UTC m=+0.565639203 container remove f115caab9c7387305dbc05d3d9a53d094510a7be8438aeeb812858a2972247d1 (image=quay.io/ceph/ceph:v19, name=inspiring_keldysh, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:12:57 compute-0 systemd[1]: libpod-conmon-f115caab9c7387305dbc05d3d9a53d094510a7be8438aeeb812858a2972247d1.scope: Deactivated successfully.
Sep 30 14:12:57 compute-0 ansible-async_wrapper.py[78730]: Module complete (78730)
Sep 30 14:12:57 compute-0 sudo[79108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:57 compute-0 sudo[79108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:57 compute-0 sudo[79108]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:57 compute-0 sudo[79133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:12:57 compute-0 sudo[79133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:57 compute-0 sudo[79133]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:57 compute-0 sudo[79181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:12:57 compute-0 sudo[79181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:57 compute-0 sudo[79181]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:57 compute-0 sudo[79206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:12:57 compute-0 sudo[79206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:57 compute-0 sudo[79206]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:57 compute-0 ceph-mon[74194]: Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:12:57 compute-0 ceph-mon[74194]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:12:57 compute-0 ceph-mon[74194]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:12:57 compute-0 sudo[79231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:12:57 compute-0 sudo[79231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:57 compute-0 sudo[79231]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:12:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:12:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:12:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:57 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 05f04c6d-5c5e-42d0-9227-6a59b00583b4 (Updating crash deployment (+1 -> 1))
Sep 30 14:12:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Sep 30 14:12:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 14:12:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Sep 30 14:12:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:12:57 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:12:57 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Sep 30 14:12:57 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Sep 30 14:12:57 compute-0 sudo[79279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:12:57 compute-0 sudo[79279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:57 compute-0 sudo[79279]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:57 compute-0 sudo[79304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:12:57 compute-0 sudo[79304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:57 compute-0 sudo[79352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkhzfkiwkskrwskfxasxfcruiygkormt ; /usr/bin/python3'
Sep 30 14:12:57 compute-0 sudo[79352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:12:57 compute-0 python3[79354]: ansible-ansible.legacy.async_status Invoked with jid=j47590079662.78653 mode=status _async_dir=/root/.ansible_async
Sep 30 14:12:57 compute-0 sudo[79352]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:57 compute-0 sudo[79454]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oulbeawiiozznohksaxxxgnsljxolxml ; /usr/bin/python3'
Sep 30 14:12:57 compute-0 sudo[79454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:12:57 compute-0 podman[79420]: 2025-09-30 14:12:57.914194151 +0000 UTC m=+0.039376005 container create 962a0a730c3616e962d6a3e19f180a3ee5b54271bbf700080c77bfa29751c75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:12:57 compute-0 systemd[1]: Started libpod-conmon-962a0a730c3616e962d6a3e19f180a3ee5b54271bbf700080c77bfa29751c75f.scope.
Sep 30 14:12:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:57 compute-0 podman[79420]: 2025-09-30 14:12:57.969042768 +0000 UTC m=+0.094224652 container init 962a0a730c3616e962d6a3e19f180a3ee5b54271bbf700080c77bfa29751c75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_herschel, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:12:57 compute-0 podman[79420]: 2025-09-30 14:12:57.976216694 +0000 UTC m=+0.101398548 container start 962a0a730c3616e962d6a3e19f180a3ee5b54271bbf700080c77bfa29751c75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:12:57 compute-0 podman[79420]: 2025-09-30 14:12:57.979345015 +0000 UTC m=+0.104526899 container attach 962a0a730c3616e962d6a3e19f180a3ee5b54271bbf700080c77bfa29751c75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_herschel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 14:12:57 compute-0 lucid_herschel[79461]: 167 167
Sep 30 14:12:57 compute-0 systemd[1]: libpod-962a0a730c3616e962d6a3e19f180a3ee5b54271bbf700080c77bfa29751c75f.scope: Deactivated successfully.
Sep 30 14:12:57 compute-0 conmon[79461]: conmon 962a0a730c3616e962d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-962a0a730c3616e962d6a3e19f180a3ee5b54271bbf700080c77bfa29751c75f.scope/container/memory.events
Sep 30 14:12:57 compute-0 podman[79420]: 2025-09-30 14:12:57.981602744 +0000 UTC m=+0.106784598 container died 962a0a730c3616e962d6a3e19f180a3ee5b54271bbf700080c77bfa29751c75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_herschel, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:12:57 compute-0 podman[79420]: 2025-09-30 14:12:57.896033398 +0000 UTC m=+0.021215292 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ad43f92a8fac9fb8467c63957cc1fad18e26caf9c8893e6cbe673a686e327a9-merged.mount: Deactivated successfully.
Sep 30 14:12:58 compute-0 podman[79420]: 2025-09-30 14:12:58.018321999 +0000 UTC m=+0.143503853 container remove 962a0a730c3616e962d6a3e19f180a3ee5b54271bbf700080c77bfa29751c75f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_herschel, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:12:58 compute-0 systemd[1]: libpod-conmon-962a0a730c3616e962d6a3e19f180a3ee5b54271bbf700080c77bfa29751c75f.scope: Deactivated successfully.
Sep 30 14:12:58 compute-0 python3[79458]: ansible-ansible.legacy.async_status Invoked with jid=j47590079662.78653 mode=cleanup _async_dir=/root/.ansible_async
Sep 30 14:12:58 compute-0 sudo[79454]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:58 compute-0 systemd[1]: Reloading.
Sep 30 14:12:58 compute-0 systemd-rc-local-generator[79503]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:12:58 compute-0 systemd-sysv-generator[79508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:12:58 compute-0 sshd-session[79355]: Received disconnect from 209.38.228.14 port 43592:11: Bye Bye [preauth]
Sep 30 14:12:58 compute-0 sshd-session[79355]: Disconnected from authenticating user root 209.38.228.14 port 43592 [preauth]
Sep 30 14:12:58 compute-0 systemd[1]: Reloading.
Sep 30 14:12:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:12:58 compute-0 ceph-mon[74194]: Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:12:58 compute-0 ceph-mon[74194]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:12:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 14:12:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Sep 30 14:12:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:12:58 compute-0 ceph-mon[74194]: Deploying daemon crash.compute-0 on compute-0
Sep 30 14:12:58 compute-0 systemd-sysv-generator[79575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:12:58 compute-0 systemd-rc-local-generator[79571]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:12:58 compute-0 sudo[79543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsgvnrpzwkdkahqinazqbhxruczhsbox ; /usr/bin/python3'
Sep 30 14:12:58 compute-0 sudo[79543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:12:58 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:12:58 compute-0 python3[79581]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:12:58 compute-0 sudo[79543]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:58 compute-0 podman[79631]: 2025-09-30 14:12:58.811674526 +0000 UTC m=+0.041439159 container create ccc58ffcac3b6037ccbe9f63a400879e34a9fda4cd233b3566d23e3d16bb0e65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 14:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44745e24b080125062ce2d939a075511ed5e353dc2c645c0e06605391790af9b/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44745e24b080125062ce2d939a075511ed5e353dc2c645c0e06605391790af9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44745e24b080125062ce2d939a075511ed5e353dc2c645c0e06605391790af9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44745e24b080125062ce2d939a075511ed5e353dc2c645c0e06605391790af9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:58 compute-0 podman[79631]: 2025-09-30 14:12:58.863664798 +0000 UTC m=+0.093429461 container init ccc58ffcac3b6037ccbe9f63a400879e34a9fda4cd233b3566d23e3d16bb0e65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:12:58 compute-0 podman[79631]: 2025-09-30 14:12:58.868355891 +0000 UTC m=+0.098120514 container start ccc58ffcac3b6037ccbe9f63a400879e34a9fda4cd233b3566d23e3d16bb0e65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:12:58 compute-0 bash[79631]: ccc58ffcac3b6037ccbe9f63a400879e34a9fda4cd233b3566d23e3d16bb0e65
Sep 30 14:12:58 compute-0 podman[79631]: 2025-09-30 14:12:58.793802761 +0000 UTC m=+0.023567414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:12:58 compute-0 systemd[1]: Started Ceph crash.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:12:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: INFO:ceph-crash:pinging cluster to exercise our key
Sep 30 14:12:58 compute-0 sudo[79304]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:12:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:12:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 14:12:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:58 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 05f04c6d-5c5e-42d0-9227-6a59b00583b4 (Updating crash deployment (+1 -> 1))
Sep 30 14:12:58 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 05f04c6d-5c5e-42d0-9227-6a59b00583b4 (Updating crash deployment (+1 -> 1)) in 2 seconds
Sep 30 14:12:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 14:12:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 14:12:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 14:12:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: 2025-09-30T14:12:59.017+0000 7fae63e6d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Sep 30 14:12:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: 2025-09-30T14:12:59.017+0000 7fae63e6d640 -1 AuthRegistry(0x7fae5c0698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Sep 30 14:12:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: 2025-09-30T14:12:59.019+0000 7fae63e6d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Sep 30 14:12:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: 2025-09-30T14:12:59.019+0000 7fae63e6d640 -1 AuthRegistry(0x7fae63e6bff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Sep 30 14:12:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: 2025-09-30T14:12:59.020+0000 7fae61be2640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Sep 30 14:12:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: 2025-09-30T14:12:59.020+0000 7fae63e6d640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Sep 30 14:12:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: [errno 13] RADOS permission denied (error connecting to the cluster)
Sep 30 14:12:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Sep 30 14:12:59 compute-0 sudo[79653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:12:59 compute-0 sudo[79653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:59 compute-0 sudo[79653]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:59 compute-0 sudo[79711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmjbrsrocobnlhajopwmqugoeqxmswzi ; /usr/bin/python3'
Sep 30 14:12:59 compute-0 sudo[79711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:12:59 compute-0 sudo[79712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:12:59 compute-0 sudo[79712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:59 compute-0 sudo[79712]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:59 compute-0 sudo[79739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:12:59 compute-0 sudo[79739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:12:59 compute-0 python3[79717]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:12:59 compute-0 podman[79764]: 2025-09-30 14:12:59.282543284 +0000 UTC m=+0.047514316 container create 1272070ad559267ff0eaeb299184155175af80d7fb2dc6afcab272c4b4f68898 (image=quay.io/ceph/ceph:v19, name=nifty_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:12:59 compute-0 systemd[1]: Started libpod-conmon-1272070ad559267ff0eaeb299184155175af80d7fb2dc6afcab272c4b4f68898.scope.
Sep 30 14:12:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:12:59 compute-0 podman[79764]: 2025-09-30 14:12:59.26161217 +0000 UTC m=+0.026583222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9aab0c478ea045cb9823db344e860065b054edf7c42a0b2741ba92379e1dee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9aab0c478ea045cb9823db344e860065b054edf7c42a0b2741ba92379e1dee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9aab0c478ea045cb9823db344e860065b054edf7c42a0b2741ba92379e1dee/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:12:59 compute-0 podman[79764]: 2025-09-30 14:12:59.371629972 +0000 UTC m=+0.136601034 container init 1272070ad559267ff0eaeb299184155175af80d7fb2dc6afcab272c4b4f68898 (image=quay.io/ceph/ceph:v19, name=nifty_dijkstra, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:12:59 compute-0 podman[79764]: 2025-09-30 14:12:59.382191227 +0000 UTC m=+0.147162259 container start 1272070ad559267ff0eaeb299184155175af80d7fb2dc6afcab272c4b4f68898 (image=quay.io/ceph/ceph:v19, name=nifty_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Sep 30 14:12:59 compute-0 podman[79764]: 2025-09-30 14:12:59.386918309 +0000 UTC m=+0.151889371 container attach 1272070ad559267ff0eaeb299184155175af80d7fb2dc6afcab272c4b4f68898 (image=quay.io/ceph/ceph:v19, name=nifty_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:12:59 compute-0 ceph-mon[74194]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:12:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:12:59 compute-0 podman[79872]: 2025-09-30 14:12:59.676287357 +0000 UTC m=+0.053404011 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:12:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:12:59 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:12:59 compute-0 nifty_dijkstra[79779]: 
Sep 30 14:12:59 compute-0 nifty_dijkstra[79779]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Sep 30 14:12:59 compute-0 systemd[1]: libpod-1272070ad559267ff0eaeb299184155175af80d7fb2dc6afcab272c4b4f68898.scope: Deactivated successfully.
Sep 30 14:12:59 compute-0 podman[79764]: 2025-09-30 14:12:59.750665391 +0000 UTC m=+0.515636453 container died 1272070ad559267ff0eaeb299184155175af80d7fb2dc6afcab272c4b4f68898 (image=quay.io/ceph/ceph:v19, name=nifty_dijkstra, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac9aab0c478ea045cb9823db344e860065b054edf7c42a0b2741ba92379e1dee-merged.mount: Deactivated successfully.
Sep 30 14:12:59 compute-0 podman[79764]: 2025-09-30 14:12:59.786106333 +0000 UTC m=+0.551077365 container remove 1272070ad559267ff0eaeb299184155175af80d7fb2dc6afcab272c4b4f68898 (image=quay.io/ceph/ceph:v19, name=nifty_dijkstra, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:12:59 compute-0 podman[79872]: 2025-09-30 14:12:59.787960291 +0000 UTC m=+0.165076935 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 14:12:59 compute-0 systemd[1]: libpod-conmon-1272070ad559267ff0eaeb299184155175af80d7fb2dc6afcab272c4b4f68898.scope: Deactivated successfully.
Sep 30 14:12:59 compute-0 sudo[79711]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:59 compute-0 sudo[79739]: pam_unix(sudo:session): session closed for user root
Sep 30 14:12:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:00 compute-0 sudo[79955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:13:00 compute-0 sudo[79955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:00 compute-0 sudo[79955]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:00 compute-0 sudo[80001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agkzcxdtjjqypkszgzjhauuuejztluik ; /usr/bin/python3'
Sep 30 14:13:00 compute-0 sudo[80001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:00 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Sep 30 14:13:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:00 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 14:13:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 14:13:00 compute-0 sudo[80006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:13:00 compute-0 sudo[80006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:00 compute-0 sudo[80006]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:00 compute-0 sudo[80031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:13:00 compute-0 sudo[80031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:00 compute-0 python3[80005]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:13:00 compute-0 podman[80056]: 2025-09-30 14:13:00.290564894 +0000 UTC m=+0.037766793 container create 4b09f233c4f38e5129768862cbd0565330ef4518d1155cfa8127220187642b11 (image=quay.io/ceph/ceph:v19, name=wizardly_ramanujan, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:13:00 compute-0 systemd[1]: Started libpod-conmon-4b09f233c4f38e5129768862cbd0565330ef4518d1155cfa8127220187642b11.scope.
Sep 30 14:13:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d7715f7b27b4d775f932933a6b815ab8d45de1cdee4e04bd62ed5eea495609/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d7715f7b27b4d775f932933a6b815ab8d45de1cdee4e04bd62ed5eea495609/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d7715f7b27b4d775f932933a6b815ab8d45de1cdee4e04bd62ed5eea495609/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:00 compute-0 podman[80056]: 2025-09-30 14:13:00.271850617 +0000 UTC m=+0.019052546 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:13:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:00 compute-0 podman[80056]: 2025-09-30 14:13:00.375835672 +0000 UTC m=+0.123037571 container init 4b09f233c4f38e5129768862cbd0565330ef4518d1155cfa8127220187642b11 (image=quay.io/ceph/ceph:v19, name=wizardly_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:13:00 compute-0 podman[80056]: 2025-09-30 14:13:00.383077141 +0000 UTC m=+0.130279040 container start 4b09f233c4f38e5129768862cbd0565330ef4518d1155cfa8127220187642b11 (image=quay.io/ceph/ceph:v19, name=wizardly_ramanujan, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:00 compute-0 podman[80056]: 2025-09-30 14:13:00.386394257 +0000 UTC m=+0.133596186 container attach 4b09f233c4f38e5129768862cbd0565330ef4518d1155cfa8127220187642b11 (image=quay.io/ceph/ceph:v19, name=wizardly_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:13:00 compute-0 podman[80089]: 2025-09-30 14:13:00.479616302 +0000 UTC m=+0.037558528 container create 597e413d7fa025b60fe1144e27129b3f1a7640d8e4bcafe6c756c070dd092831 (image=quay.io/ceph/ceph:v19, name=zen_bardeen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 14:13:00 compute-0 systemd[1]: Started libpod-conmon-597e413d7fa025b60fe1144e27129b3f1a7640d8e4bcafe6c756c070dd092831.scope.
Sep 30 14:13:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:00 compute-0 podman[80089]: 2025-09-30 14:13:00.544986052 +0000 UTC m=+0.102928288 container init 597e413d7fa025b60fe1144e27129b3f1a7640d8e4bcafe6c756c070dd092831 (image=quay.io/ceph/ceph:v19, name=zen_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:13:00 compute-0 podman[80089]: 2025-09-30 14:13:00.550023333 +0000 UTC m=+0.107965559 container start 597e413d7fa025b60fe1144e27129b3f1a7640d8e4bcafe6c756c070dd092831 (image=quay.io/ceph/ceph:v19, name=zen_bardeen, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:13:00 compute-0 zen_bardeen[80124]: 167 167
Sep 30 14:13:00 compute-0 podman[80089]: 2025-09-30 14:13:00.553213986 +0000 UTC m=+0.111156242 container attach 597e413d7fa025b60fe1144e27129b3f1a7640d8e4bcafe6c756c070dd092831 (image=quay.io/ceph/ceph:v19, name=zen_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 14:13:00 compute-0 systemd[1]: libpod-597e413d7fa025b60fe1144e27129b3f1a7640d8e4bcafe6c756c070dd092831.scope: Deactivated successfully.
Sep 30 14:13:00 compute-0 conmon[80124]: conmon 597e413d7fa025b60fe1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-597e413d7fa025b60fe1144e27129b3f1a7640d8e4bcafe6c756c070dd092831.scope/container/memory.events
Sep 30 14:13:00 compute-0 podman[80089]: 2025-09-30 14:13:00.554765807 +0000 UTC m=+0.112708033 container died 597e413d7fa025b60fe1144e27129b3f1a7640d8e4bcafe6c756c070dd092831 (image=quay.io/ceph/ceph:v19, name=zen_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 14:13:00 compute-0 podman[80089]: 2025-09-30 14:13:00.460396552 +0000 UTC m=+0.018338798 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-fead34f62c16a09c4355af969055bc35e4ed7a78503b61dc2b95acfeec0771b9-merged.mount: Deactivated successfully.
Sep 30 14:13:00 compute-0 podman[80089]: 2025-09-30 14:13:00.593618847 +0000 UTC m=+0.151561073 container remove 597e413d7fa025b60fe1144e27129b3f1a7640d8e4bcafe6c756c070dd092831 (image=quay.io/ceph/ceph:v19, name=zen_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:13:00 compute-0 systemd[1]: libpod-conmon-597e413d7fa025b60fe1144e27129b3f1a7640d8e4bcafe6c756c070dd092831.scope: Deactivated successfully.
Sep 30 14:13:00 compute-0 sudo[80031]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:00 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.buxlkm (unknown last config time)...
Sep 30 14:13:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.buxlkm (unknown last config time)...
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.buxlkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.buxlkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:00 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.buxlkm on compute-0
Sep 30 14:13:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.buxlkm on compute-0
Sep 30 14:13:00 compute-0 sudo[80139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:13:00 compute-0 sudo[80139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:00 compute-0 sudo[80139]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Sep 30 14:13:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1095434383' entity='client.admin' 
Sep 30 14:13:00 compute-0 sudo[80164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:13:00 compute-0 sudo[80164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:00 compute-0 systemd[1]: libpod-4b09f233c4f38e5129768862cbd0565330ef4518d1155cfa8127220187642b11.scope: Deactivated successfully.
Sep 30 14:13:00 compute-0 podman[80056]: 2025-09-30 14:13:00.773921937 +0000 UTC m=+0.521123836 container died 4b09f233c4f38e5129768862cbd0565330ef4518d1155cfa8127220187642b11 (image=quay.io/ceph/ceph:v19, name=wizardly_ramanujan, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-18d7715f7b27b4d775f932933a6b815ab8d45de1cdee4e04bd62ed5eea495609-merged.mount: Deactivated successfully.
Sep 30 14:13:00 compute-0 podman[80056]: 2025-09-30 14:13:00.813911208 +0000 UTC m=+0.561113107 container remove 4b09f233c4f38e5129768862cbd0565330ef4518d1155cfa8127220187642b11 (image=quay.io/ceph/ceph:v19, name=wizardly_ramanujan, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:00 compute-0 systemd[1]: libpod-conmon-4b09f233c4f38e5129768862cbd0565330ef4518d1155cfa8127220187642b11.scope: Deactivated successfully.
Sep 30 14:13:00 compute-0 sudo[80001]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:00 compute-0 sudo[80224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bognfginxcpgbjggkkewfmgigcvisgzr ; /usr/bin/python3'
Sep 30 14:13:00 compute-0 sudo[80224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: Reconfiguring mon.compute-0 (unknown last config time)...
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 14:13:01 compute-0 ceph-mon[74194]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.buxlkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1095434383' entity='client.admin' 
Sep 30 14:13:01 compute-0 podman[80244]: 2025-09-30 14:13:01.09657802 +0000 UTC m=+0.045874244 container create 7782da0a7dfef5364e3d51003bac51ed94c98b7cb096c84fb33601c6b4edda79 (image=quay.io/ceph/ceph:v19, name=vigilant_keldysh, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:01 compute-0 systemd[1]: Started libpod-conmon-7782da0a7dfef5364e3d51003bac51ed94c98b7cb096c84fb33601c6b4edda79.scope.
Sep 30 14:13:01 compute-0 python3[80236]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:13:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:01 compute-0 podman[80244]: 2025-09-30 14:13:01.162254099 +0000 UTC m=+0.111550363 container init 7782da0a7dfef5364e3d51003bac51ed94c98b7cb096c84fb33601c6b4edda79 (image=quay.io/ceph/ceph:v19, name=vigilant_keldysh, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:13:01 compute-0 podman[80244]: 2025-09-30 14:13:01.072380991 +0000 UTC m=+0.021677235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:13:01 compute-0 podman[80244]: 2025-09-30 14:13:01.170118363 +0000 UTC m=+0.119414577 container start 7782da0a7dfef5364e3d51003bac51ed94c98b7cb096c84fb33601c6b4edda79 (image=quay.io/ceph/ceph:v19, name=vigilant_keldysh, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:13:01 compute-0 vigilant_keldysh[80260]: 167 167
Sep 30 14:13:01 compute-0 podman[80244]: 2025-09-30 14:13:01.173857071 +0000 UTC m=+0.123153295 container attach 7782da0a7dfef5364e3d51003bac51ed94c98b7cb096c84fb33601c6b4edda79 (image=quay.io/ceph/ceph:v19, name=vigilant_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:01 compute-0 systemd[1]: libpod-7782da0a7dfef5364e3d51003bac51ed94c98b7cb096c84fb33601c6b4edda79.scope: Deactivated successfully.
Sep 30 14:13:01 compute-0 podman[80244]: 2025-09-30 14:13:01.176089059 +0000 UTC m=+0.125385303 container died 7782da0a7dfef5364e3d51003bac51ed94c98b7cb096c84fb33601c6b4edda79 (image=quay.io/ceph/ceph:v19, name=vigilant_keldysh, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:01 compute-0 podman[80262]: 2025-09-30 14:13:01.198776649 +0000 UTC m=+0.053951005 container create 2cadece1e237cb076193e0aec287d24177d153b4e059ed70eaadb81b9a496d0f (image=quay.io/ceph/ceph:v19, name=vigorous_roentgen, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:13:01 compute-0 systemd[1]: Started libpod-conmon-2cadece1e237cb076193e0aec287d24177d153b4e059ed70eaadb81b9a496d0f.scope.
Sep 30 14:13:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8951ff44f3cab39a8fb27b98dfd4ad412face5efa4b77e2343870180a432cbdd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8951ff44f3cab39a8fb27b98dfd4ad412face5efa4b77e2343870180a432cbdd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8951ff44f3cab39a8fb27b98dfd4ad412face5efa4b77e2343870180a432cbdd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:01 compute-0 podman[80244]: 2025-09-30 14:13:01.255877654 +0000 UTC m=+0.205173878 container remove 7782da0a7dfef5364e3d51003bac51ed94c98b7cb096c84fb33601c6b4edda79 (image=quay.io/ceph/ceph:v19, name=vigilant_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:13:01 compute-0 systemd[1]: libpod-conmon-7782da0a7dfef5364e3d51003bac51ed94c98b7cb096c84fb33601c6b4edda79.scope: Deactivated successfully.
Sep 30 14:13:01 compute-0 podman[80262]: 2025-09-30 14:13:01.168426689 +0000 UTC m=+0.023601065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:13:01 compute-0 podman[80262]: 2025-09-30 14:13:01.269336074 +0000 UTC m=+0.124510460 container init 2cadece1e237cb076193e0aec287d24177d153b4e059ed70eaadb81b9a496d0f (image=quay.io/ceph/ceph:v19, name=vigorous_roentgen, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:13:01 compute-0 podman[80262]: 2025-09-30 14:13:01.274257372 +0000 UTC m=+0.129431728 container start 2cadece1e237cb076193e0aec287d24177d153b4e059ed70eaadb81b9a496d0f (image=quay.io/ceph/ceph:v19, name=vigorous_roentgen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:13:01 compute-0 podman[80262]: 2025-09-30 14:13:01.279876248 +0000 UTC m=+0.135050634 container attach 2cadece1e237cb076193e0aec287d24177d153b4e059ed70eaadb81b9a496d0f (image=quay.io/ceph/ceph:v19, name=vigorous_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ea7cc431782a2150a45cd3c5d6037ebd484515614d3a5990be890fb49a7e465-merged.mount: Deactivated successfully.
Sep 30 14:13:01 compute-0 ansible-async_wrapper.py[78728]: Done in kid B.
Sep 30 14:13:01 compute-0 sudo[80164]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:13:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:13:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:13:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:13:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 sudo[80301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:13:01 compute-0 sudo[80301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:01 compute-0 sudo[80301]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Sep 30 14:13:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2084516514' entity='client.admin' 
Sep 30 14:13:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:13:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:13:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:13:01 compute-0 systemd[1]: libpod-2cadece1e237cb076193e0aec287d24177d153b4e059ed70eaadb81b9a496d0f.scope: Deactivated successfully.
Sep 30 14:13:01 compute-0 podman[80262]: 2025-09-30 14:13:01.644975455 +0000 UTC m=+0.500149811 container died 2cadece1e237cb076193e0aec287d24177d153b4e059ed70eaadb81b9a496d0f (image=quay.io/ceph/ceph:v19, name=vigorous_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 14:13:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8951ff44f3cab39a8fb27b98dfd4ad412face5efa4b77e2343870180a432cbdd-merged.mount: Deactivated successfully.
Sep 30 14:13:01 compute-0 podman[80262]: 2025-09-30 14:13:01.7097298 +0000 UTC m=+0.564904156 container remove 2cadece1e237cb076193e0aec287d24177d153b4e059ed70eaadb81b9a496d0f (image=quay.io/ceph/ceph:v19, name=vigorous_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:13:01 compute-0 systemd[1]: libpod-conmon-2cadece1e237cb076193e0aec287d24177d153b4e059ed70eaadb81b9a496d0f.scope: Deactivated successfully.
Sep 30 14:13:01 compute-0 sudo[80351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:13:01 compute-0 sudo[80351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:01 compute-0 sudo[80351]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:01 compute-0 sudo[80224]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:01 compute-0 sudo[80406]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpnumaddaswhcneepwrrdywzcemwylun ; /usr/bin/python3'
Sep 30 14:13:01 compute-0 sudo[80406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:13:02 compute-0 python3[80408]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:13:02 compute-0 podman[80409]: 2025-09-30 14:13:02.133506383 +0000 UTC m=+0.061173222 container create 6e74a851e9910fddf083565e79da27f518c35101f3a18f5b5f28c84d16437a8b (image=quay.io/ceph/ceph:v19, name=gifted_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 14:13:02 compute-0 systemd[1]: Started libpod-conmon-6e74a851e9910fddf083565e79da27f518c35101f3a18f5b5f28c84d16437a8b.scope.
Sep 30 14:13:02 compute-0 podman[80409]: 2025-09-30 14:13:02.095460643 +0000 UTC m=+0.023127512 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:13:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e9a1a08464881ac3d01c160bd6c9ab9e3c7774ce960d938c0114b1395615d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e9a1a08464881ac3d01c160bd6c9ab9e3c7774ce960d938c0114b1395615d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e9a1a08464881ac3d01c160bd6c9ab9e3c7774ce960d938c0114b1395615d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:02 compute-0 podman[80409]: 2025-09-30 14:13:02.226307427 +0000 UTC m=+0.153974306 container init 6e74a851e9910fddf083565e79da27f518c35101f3a18f5b5f28c84d16437a8b (image=quay.io/ceph/ceph:v19, name=gifted_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:13:02 compute-0 podman[80409]: 2025-09-30 14:13:02.231182654 +0000 UTC m=+0.158849513 container start 6e74a851e9910fddf083565e79da27f518c35101f3a18f5b5f28c84d16437a8b (image=quay.io/ceph/ceph:v19, name=gifted_nobel, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:13:02 compute-0 podman[80409]: 2025-09-30 14:13:02.234293965 +0000 UTC m=+0.161960804 container attach 6e74a851e9910fddf083565e79da27f518c35101f3a18f5b5f28c84d16437a8b (image=quay.io/ceph/ceph:v19, name=gifted_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:02 compute-0 ceph-mon[74194]: Reconfiguring mgr.compute-0.buxlkm (unknown last config time)...
Sep 30 14:13:02 compute-0 ceph-mon[74194]: Reconfiguring daemon mgr.compute-0.buxlkm on compute-0
Sep 30 14:13:02 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:02 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:02 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:02 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:13:02 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:02 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2084516514' entity='client.admin' 
Sep 30 14:13:02 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:02 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:13:02 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Sep 30 14:13:02 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1127197694' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Sep 30 14:13:03 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 1 completed events
Sep 30 14:13:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:13:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:13:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:13:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:13:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:13:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:13:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:13:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Sep 30 14:13:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:13:03 compute-0 ceph-mon[74194]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:03 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1127197694' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Sep 30 14:13:03 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1127197694' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Sep 30 14:13:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Sep 30 14:13:03 compute-0 gifted_nobel[80424]: set require_min_compat_client to mimic
Sep 30 14:13:03 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Sep 30 14:13:03 compute-0 systemd[1]: libpod-6e74a851e9910fddf083565e79da27f518c35101f3a18f5b5f28c84d16437a8b.scope: Deactivated successfully.
Sep 30 14:13:03 compute-0 podman[80409]: 2025-09-30 14:13:03.428855548 +0000 UTC m=+1.356522387 container died 6e74a851e9910fddf083565e79da27f518c35101f3a18f5b5f28c84d16437a8b (image=quay.io/ceph/ceph:v19, name=gifted_nobel, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-44e9a1a08464881ac3d01c160bd6c9ab9e3c7774ce960d938c0114b1395615d0-merged.mount: Deactivated successfully.
Sep 30 14:13:03 compute-0 podman[80409]: 2025-09-30 14:13:03.46467798 +0000 UTC m=+1.392344819 container remove 6e74a851e9910fddf083565e79da27f518c35101f3a18f5b5f28c84d16437a8b (image=quay.io/ceph/ceph:v19, name=gifted_nobel, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:13:03 compute-0 systemd[1]: libpod-conmon-6e74a851e9910fddf083565e79da27f518c35101f3a18f5b5f28c84d16437a8b.scope: Deactivated successfully.
Sep 30 14:13:03 compute-0 sudo[80406]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:03 compute-0 sudo[80484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aftjyrskfcevkmlsovdbbhnpedwbciim ; /usr/bin/python3'
Sep 30 14:13:03 compute-0 sudo[80484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:13:04 compute-0 python3[80486]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:13:04 compute-0 podman[80487]: 2025-09-30 14:13:04.068086605 +0000 UTC m=+0.026356897 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:13:04 compute-0 podman[80487]: 2025-09-30 14:13:04.194328648 +0000 UTC m=+0.152598910 container create 3f3602f10bcfba60beb473ed7b17b58b82c8a4f72dbeea69a41d0b75e9157825 (image=quay.io/ceph/ceph:v19, name=dazzling_lehmann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:13:04 compute-0 systemd[1]: Started libpod-conmon-3f3602f10bcfba60beb473ed7b17b58b82c8a4f72dbeea69a41d0b75e9157825.scope.
Sep 30 14:13:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30c968ee31a505a60f0659e83c70eeef9c02725b60d6b30235e483b72fd84c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30c968ee31a505a60f0659e83c70eeef9c02725b60d6b30235e483b72fd84c4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30c968ee31a505a60f0659e83c70eeef9c02725b60d6b30235e483b72fd84c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:04 compute-0 podman[80487]: 2025-09-30 14:13:04.326061596 +0000 UTC m=+0.284331878 container init 3f3602f10bcfba60beb473ed7b17b58b82c8a4f72dbeea69a41d0b75e9157825 (image=quay.io/ceph/ceph:v19, name=dazzling_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:13:04 compute-0 podman[80487]: 2025-09-30 14:13:04.332195756 +0000 UTC m=+0.290466028 container start 3f3602f10bcfba60beb473ed7b17b58b82c8a4f72dbeea69a41d0b75e9157825 (image=quay.io/ceph/ceph:v19, name=dazzling_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:13:04 compute-0 podman[80487]: 2025-09-30 14:13:04.336507338 +0000 UTC m=+0.294777600 container attach 3f3602f10bcfba60beb473ed7b17b58b82c8a4f72dbeea69a41d0b75e9157825 (image=quay.io/ceph/ceph:v19, name=dazzling_lehmann, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:13:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:04 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1127197694' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Sep 30 14:13:04 compute-0 ceph-mon[74194]: osdmap e3: 0 total, 0 up, 0 in
Sep 30 14:13:04 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:13:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:04 compute-0 sudo[80526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:13:04 compute-0 sudo[80526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:04 compute-0 sudo[80526]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:04 compute-0 sudo[80551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Sep 30 14:13:04 compute-0 sudo[80551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:05 compute-0 sudo[80551]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 14:13:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 14:13:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 14:13:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 14:13:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:05 compute-0 ceph-mgr[74485]: [cephadm INFO root] Added host compute-0
Sep 30 14:13:05 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Added host compute-0
Sep 30 14:13:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:05 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:13:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:13:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:13:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:05 compute-0 sudo[80597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:13:05 compute-0 sudo[80597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:05 compute-0 sudo[80597]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:05 compute-0 ceph-mon[74194]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:05 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:05 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:05 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:05 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:05 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:05 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:13:05 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:06 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Sep 30 14:13:06 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Sep 30 14:13:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:06 compute-0 ceph-mon[74194]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:13:06 compute-0 ceph-mon[74194]: Added host compute-0
Sep 30 14:13:07 compute-0 ceph-mon[74194]: Deploying cephadm binary to compute-1
Sep 30 14:13:07 compute-0 ceph-mon[74194]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:09 compute-0 sshd-session[80622]: Invalid user wqadmin from 210.90.155.80 port 36462
Sep 30 14:13:09 compute-0 sshd-session[80622]: Received disconnect from 210.90.155.80 port 36462:11: Bye Bye [preauth]
Sep 30 14:13:09 compute-0 sshd-session[80622]: Disconnected from invalid user wqadmin 210.90.155.80 port 36462 [preauth]
Sep 30 14:13:09 compute-0 ceph-mon[74194]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 14:13:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:10 compute-0 ceph-mgr[74485]: [cephadm INFO root] Added host compute-1
Sep 30 14:13:10 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Added host compute-1
Sep 30 14:13:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:11 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:11 compute-0 ceph-mon[74194]: Added host compute-1
Sep 30 14:13:11 compute-0 ceph-mon[74194]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:11 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:11 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:11 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Sep 30 14:13:11 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Sep 30 14:13:12 compute-0 ceph-mon[74194]: Deploying cephadm binary to compute-2
Sep 30 14:13:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:13 compute-0 ceph-mon[74194]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:13 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Sep 30 14:13:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:14 compute-0 ceph-mgr[74485]: [cephadm INFO root] Added host compute-2
Sep 30 14:13:14 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Added host compute-2
Sep 30 14:13:14 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Sep 30 14:13:15 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Sep 30 14:13:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 14:13:15 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:15 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Sep 30 14:13:15 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Sep 30 14:13:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 14:13:15 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:15 compute-0 ceph-mgr[74485]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Sep 30 14:13:15 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Sep 30 14:13:15 compute-0 ceph-mgr[74485]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Sep 30 14:13:15 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Sep 30 14:13:15 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Sep 30 14:13:15 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Sep 30 14:13:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Sep 30 14:13:15 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:15 compute-0 dazzling_lehmann[80502]: Added host 'compute-0' with addr '192.168.122.100'
Sep 30 14:13:15 compute-0 dazzling_lehmann[80502]: Added host 'compute-1' with addr '192.168.122.101'
Sep 30 14:13:15 compute-0 dazzling_lehmann[80502]: Added host 'compute-2' with addr '192.168.122.102'
Sep 30 14:13:15 compute-0 dazzling_lehmann[80502]: Scheduled mon update...
Sep 30 14:13:15 compute-0 dazzling_lehmann[80502]: Scheduled mgr update...
Sep 30 14:13:15 compute-0 dazzling_lehmann[80502]: Scheduled osd.default_drive_group update...
Sep 30 14:13:15 compute-0 systemd[1]: libpod-3f3602f10bcfba60beb473ed7b17b58b82c8a4f72dbeea69a41d0b75e9157825.scope: Deactivated successfully.
Sep 30 14:13:15 compute-0 podman[80487]: 2025-09-30 14:13:15.38581301 +0000 UTC m=+11.344083292 container died 3f3602f10bcfba60beb473ed7b17b58b82c8a4f72dbeea69a41d0b75e9157825 (image=quay.io/ceph/ceph:v19, name=dazzling_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:13:15 compute-0 ceph-mon[74194]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:15 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:15 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:15 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:15 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a30c968ee31a505a60f0659e83c70eeef9c02725b60d6b30235e483b72fd84c4-merged.mount: Deactivated successfully.
Sep 30 14:13:15 compute-0 podman[80487]: 2025-09-30 14:13:15.984950394 +0000 UTC m=+11.943220656 container remove 3f3602f10bcfba60beb473ed7b17b58b82c8a4f72dbeea69a41d0b75e9157825 (image=quay.io/ceph/ceph:v19, name=dazzling_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:15 compute-0 systemd[1]: libpod-conmon-3f3602f10bcfba60beb473ed7b17b58b82c8a4f72dbeea69a41d0b75e9157825.scope: Deactivated successfully.
Sep 30 14:13:16 compute-0 sudo[80484]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:16 compute-0 sudo[80661]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klitfcvecszwuoieegytyknxlgdvdgbt ; /usr/bin/python3'
Sep 30 14:13:16 compute-0 sudo[80661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:13:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:16 compute-0 python3[80663]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:13:16 compute-0 podman[80665]: 2025-09-30 14:13:16.457035004 +0000 UTC m=+0.037293581 container create b49b1b8c51978e4e95429037a58b0b761e05ab45d17c0cd9f57cdc7f320ee5be (image=quay.io/ceph/ceph:v19, name=thirsty_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:13:16 compute-0 systemd[1]: Started libpod-conmon-b49b1b8c51978e4e95429037a58b0b761e05ab45d17c0cd9f57cdc7f320ee5be.scope.
Sep 30 14:13:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f577c05a0403ac52c4acfd84e37735aa204d5d2afacef8100eafbc08994aab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f577c05a0403ac52c4acfd84e37735aa204d5d2afacef8100eafbc08994aab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f577c05a0403ac52c4acfd84e37735aa204d5d2afacef8100eafbc08994aab/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:16 compute-0 podman[80665]: 2025-09-30 14:13:16.531603844 +0000 UTC m=+0.111862421 container init b49b1b8c51978e4e95429037a58b0b761e05ab45d17c0cd9f57cdc7f320ee5be (image=quay.io/ceph/ceph:v19, name=thirsty_mclean, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:13:16 compute-0 podman[80665]: 2025-09-30 14:13:16.440651488 +0000 UTC m=+0.020910095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:13:16 compute-0 podman[80665]: 2025-09-30 14:13:16.536857241 +0000 UTC m=+0.117115828 container start b49b1b8c51978e4e95429037a58b0b761e05ab45d17c0cd9f57cdc7f320ee5be (image=quay.io/ceph/ceph:v19, name=thirsty_mclean, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:13:16 compute-0 podman[80665]: 2025-09-30 14:13:16.541213784 +0000 UTC m=+0.121472381 container attach b49b1b8c51978e4e95429037a58b0b761e05ab45d17c0cd9f57cdc7f320ee5be (image=quay.io/ceph/ceph:v19, name=thirsty_mclean, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 14:13:16 compute-0 ceph-mon[74194]: Added host compute-2
Sep 30 14:13:16 compute-0 ceph-mon[74194]: Saving service mon spec with placement compute-0;compute-1;compute-2
Sep 30 14:13:16 compute-0 ceph-mon[74194]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Sep 30 14:13:16 compute-0 ceph-mon[74194]: Marking host: compute-0 for OSDSpec preview refresh.
Sep 30 14:13:16 compute-0 ceph-mon[74194]: Marking host: compute-1 for OSDSpec preview refresh.
Sep 30 14:13:16 compute-0 ceph-mon[74194]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Sep 30 14:13:16 compute-0 ceph-mon[74194]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Sep 30 14:13:16 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2299458637' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 14:13:16 compute-0 thirsty_mclean[80682]: 
Sep 30 14:13:16 compute-0 thirsty_mclean[80682]: {"fsid":"5e3c7776-ac03-5698-b79f-a6dc2d80cae6","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":67,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-09-30T14:12:06:949277+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-09-30T14:12:06.978138+0000","services":{}},"progress_events":{}}
Sep 30 14:13:17 compute-0 systemd[1]: libpod-b49b1b8c51978e4e95429037a58b0b761e05ab45d17c0cd9f57cdc7f320ee5be.scope: Deactivated successfully.
Sep 30 14:13:17 compute-0 podman[80665]: 2025-09-30 14:13:17.002066182 +0000 UTC m=+0.582324759 container died b49b1b8c51978e4e95429037a58b0b761e05ab45d17c0cd9f57cdc7f320ee5be (image=quay.io/ceph/ceph:v19, name=thirsty_mclean, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-27f577c05a0403ac52c4acfd84e37735aa204d5d2afacef8100eafbc08994aab-merged.mount: Deactivated successfully.
Sep 30 14:13:17 compute-0 podman[80665]: 2025-09-30 14:13:17.037105473 +0000 UTC m=+0.617364050 container remove b49b1b8c51978e4e95429037a58b0b761e05ab45d17c0cd9f57cdc7f320ee5be (image=quay.io/ceph/ceph:v19, name=thirsty_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:13:17 compute-0 systemd[1]: libpod-conmon-b49b1b8c51978e4e95429037a58b0b761e05ab45d17c0cd9f57cdc7f320ee5be.scope: Deactivated successfully.
Sep 30 14:13:17 compute-0 sudo[80661]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2299458637' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 14:13:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:19 compute-0 ceph-mon[74194]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:21 compute-0 ceph-mon[74194]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:23 compute-0 ceph-mon[74194]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:24 compute-0 ceph-mon[74194]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:27 compute-0 ceph-mon[74194]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:28 compute-0 ceph-mon[74194]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:31 compute-0 ceph-mon[74194]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:32 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:13:32
Sep 30 14:13:32 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:13:32 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:13:32 compute-0 ceph-mgr[74485]: [balancer INFO root] No pools available
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:13:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:13:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:13:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 14:13:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:13:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:13:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:13:33 compute-0 ceph-mon[74194]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:13:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:13:33 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:13:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:34 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:13:34 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:13:34 compute-0 ceph-mon[74194]: Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:13:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:13:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:13:35 compute-0 ceph-mon[74194]: Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:13:35 compute-0 ceph-mon[74194]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:35 compute-0 ceph-mon[74194]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:13:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:13:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:13:35.792+0000 7f02b80f8640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev c3b3be7a-9d11-455f-9067-627ff1239e21 (Updating crash deployment (+1 -> 2))
Sep 30 14:13:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Sep 30 14:13:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: service_name: mon
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: placement:
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   hosts:
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   - compute-0
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   - compute-1
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   - compute-2
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:13:35.794+0000 7f02b80f8640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: service_name: mgr
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: placement:
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   hosts:
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   - compute-0
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   - compute-1
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   - compute-2
Sep 30 14:13:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Sep 30 14:13:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
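Note: the auth get-or-create request above provisions a keyring for the crash-report agent on compute-1. Its manual equivalent would look roughly like the sketch below (same caps as logged; run from a node with an admin keyring):

    ceph auth get-or-create client.crash.compute-1 mon 'profile crash' mgr 'profile crash'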
Sep 30 14:13:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:35 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Sep 30 14:13:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Sep 30 14:13:36 compute-0 ceph-mon[74194]: Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:13:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:36 compute-0 ceph-mon[74194]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Sep 30 14:13:36 compute-0 ceph-mon[74194]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:36 compute-0 ceph-mon[74194]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Sep 30 14:13:36 compute-0 ceph-mon[74194]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 14:13:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Sep 30 14:13:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:36 compute-0 ceph-mon[74194]: Deploying daemon crash.compute-1 on compute-1
Sep 30 14:13:36 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Sep 30 14:13:37 compute-0 ceph-mon[74194]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
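Note: the CEPHADM_APPLY_SPEC_FAIL warning stays raised until the mon and mgr specs apply cleanly. A hedged way to inspect the failing services while debugging:

    ceph health detail
    ceph orch ls mon
    ceph orch ls mgr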
Sep 30 14:13:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:13:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 14:13:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:38 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev c3b3be7a-9d11-455f-9067-627ff1239e21 (Updating crash deployment (+1 -> 2))
Sep 30 14:13:38 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event c3b3be7a-9d11-455f-9067-627ff1239e21 (Updating crash deployment (+1 -> 2)) in 3 seconds
Sep 30 14:13:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 14:13:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:13:38 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:13:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:13:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:13:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:38 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:13:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:13:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:38 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:38 compute-0 sudo[80719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:13:38 compute-0 sudo[80719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:38 compute-0 sudo[80719]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:38 compute-0 sudo[80744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:13:38 compute-0 sudo[80744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
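Note: cephadm runs ceph-volume inside a one-shot container from the pinned image; the command above prepares OSDs from the pre-created LV ceph_vg0/ceph_lv0. A dry-run sketch of the same batch plan, assuming a cephadm shell is available on compute-0 for this fsid:

    cephadm shell --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- \
        ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --report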
Sep 30 14:13:38 compute-0 podman[80809]: 2025-09-30 14:13:38.849263914 +0000 UTC m=+0.036113052 container create ace7f343744b05ec811b6f2ffdcd072f9a2780606e31569f41687966a0745bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_turing, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:38 compute-0 systemd[1]: Started libpod-conmon-ace7f343744b05ec811b6f2ffdcd072f9a2780606e31569f41687966a0745bc6.scope.
Sep 30 14:13:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:38 compute-0 podman[80809]: 2025-09-30 14:13:38.834116456 +0000 UTC m=+0.020965614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:38 compute-0 podman[80809]: 2025-09-30 14:13:38.942457138 +0000 UTC m=+0.129306296 container init ace7f343744b05ec811b6f2ffdcd072f9a2780606e31569f41687966a0745bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:13:38 compute-0 podman[80809]: 2025-09-30 14:13:38.948003436 +0000 UTC m=+0.134852574 container start ace7f343744b05ec811b6f2ffdcd072f9a2780606e31569f41687966a0745bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:38 compute-0 suspicious_turing[80825]: 167 167
Sep 30 14:13:38 compute-0 systemd[1]: libpod-ace7f343744b05ec811b6f2ffdcd072f9a2780606e31569f41687966a0745bc6.scope: Deactivated successfully.
Sep 30 14:13:38 compute-0 podman[80809]: 2025-09-30 14:13:38.956928629 +0000 UTC m=+0.143777787 container attach ace7f343744b05ec811b6f2ffdcd072f9a2780606e31569f41687966a0745bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_turing, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:13:38 compute-0 podman[80809]: 2025-09-30 14:13:38.957337659 +0000 UTC m=+0.144186797 container died ace7f343744b05ec811b6f2ffdcd072f9a2780606e31569f41687966a0745bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_turing, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-52744e534aa3996d584dbcae01ee2e710c72da14b6844989d8856b8187e7d781-merged.mount: Deactivated successfully.
Sep 30 14:13:39 compute-0 podman[80809]: 2025-09-30 14:13:39.011186502 +0000 UTC m=+0.198035640 container remove ace7f343744b05ec811b6f2ffdcd072f9a2780606e31569f41687966a0745bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 14:13:39 compute-0 systemd[1]: libpod-conmon-ace7f343744b05ec811b6f2ffdcd072f9a2780606e31569f41687966a0745bc6.scope: Deactivated successfully.
Sep 30 14:13:39 compute-0 podman[80848]: 2025-09-30 14:13:39.16192 +0000 UTC m=+0.038727776 container create 3631f91d1887678343d6769ac16a1545c21b7eeae10a486d3df4ffa00081e254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:39 compute-0 systemd[1]: Started libpod-conmon-3631f91d1887678343d6769ac16a1545c21b7eeae10a486d3df4ffa00081e254.scope.
Sep 30 14:13:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0404197c84b17de1151422eaffe13e7d43a09ce5ff9d68bb155e432c2ce8c65d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0404197c84b17de1151422eaffe13e7d43a09ce5ff9d68bb155e432c2ce8c65d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0404197c84b17de1151422eaffe13e7d43a09ce5ff9d68bb155e432c2ce8c65d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0404197c84b17de1151422eaffe13e7d43a09ce5ff9d68bb155e432c2ce8c65d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0404197c84b17de1151422eaffe13e7d43a09ce5ff9d68bb155e432c2ce8c65d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:39 compute-0 podman[80848]: 2025-09-30 14:13:39.235155897 +0000 UTC m=+0.111963693 container init 3631f91d1887678343d6769ac16a1545c21b7eeae10a486d3df4ffa00081e254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sinoussi, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:13:39 compute-0 podman[80848]: 2025-09-30 14:13:39.145831219 +0000 UTC m=+0.022639015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:39 compute-0 podman[80848]: 2025-09-30 14:13:39.242387197 +0000 UTC m=+0.119194973 container start 3631f91d1887678343d6769ac16a1545c21b7eeae10a486d3df4ffa00081e254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:13:39 compute-0 podman[80848]: 2025-09-30 14:13:39.246769876 +0000 UTC m=+0.123577652 container attach 3631f91d1887678343d6769ac16a1545c21b7eeae10a486d3df4ffa00081e254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sinoussi, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:13:39 compute-0 ceph-mon[74194]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:13:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:13:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:13:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:39 compute-0 flamboyant_sinoussi[80864]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:13:39 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 14:13:39 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 14:13:39 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1bf35304-bfb4-41f5-b832-570aa31de1b2
Sep 30 14:13:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1bf35304-bfb4-41f5-b832-570aa31de1b2"} v 0)
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1573291948' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1bf35304-bfb4-41f5-b832-570aa31de1b2"}]: dispatch
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1573291948' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1bf35304-bfb4-41f5-b832-570aa31de1b2"}]': finished
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:40 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
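Note: "failed to return metadata for osd.0" at this point is most likely benign: "osd new" has only reserved the OSD id in the map (1 total, 0 up, 1 in), and no osd.0 daemon is running yet to report metadata. A hedged way to confirm once the daemon comes up:

    ceph osd tree
    ceph osd metadata 0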
Sep 30 14:13:40 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Sep 30 14:13:40 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d748213d-faa0-40a0-8834-07f3126b404a"} v 0)
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3016784148' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d748213d-faa0-40a0-8834-07f3126b404a"}]: dispatch
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:13:40 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3016784148' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d748213d-faa0-40a0-8834-07f3126b404a"}]': finished
Sep 30 14:13:40 compute-0 lvm[80925]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Sep 30 14:13:40 compute-0 lvm[80925]: VG ceph_vg0 finished
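Note: the LVM events above show that VG ceph_vg0 (backed by /dev/loop3) is complete. The ceph.* tags that ceph-volume writes onto the LV during prepare can be inspected directly, as a sketch:

    lvs -o lv_name,vg_name,lv_tags ceph_vg0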
Sep 30 14:13:40 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:40 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 14:13:40 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:13:40 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Sep 30 14:13:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1573291948' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1bf35304-bfb4-41f5-b832-570aa31de1b2"}]: dispatch
Sep 30 14:13:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1573291948' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1bf35304-bfb4-41f5-b832-570aa31de1b2"}]': finished
Sep 30 14:13:40 compute-0 ceph-mon[74194]: osdmap e4: 1 total, 0 up, 1 in
Sep 30 14:13:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3016784148' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d748213d-faa0-40a0-8834-07f3126b404a"}]: dispatch
Sep 30 14:13:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3016784148' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d748213d-faa0-40a0-8834-07f3126b404a"}]': finished
Sep 30 14:13:40 compute-0 ceph-mon[74194]: osdmap e5: 2 total, 0 up, 2 in
Sep 30 14:13:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094420979' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Sep 30 14:13:40 compute-0 flamboyant_sinoussi[80864]:  stderr: got monmap epoch 1
Sep 30 14:13:40 compute-0 flamboyant_sinoussi[80864]: --> Creating keyring file for osd.0
Sep 30 14:13:40 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Sep 30 14:13:40 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Sep 30 14:13:40 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 1bf35304-bfb4-41f5-b832-570aa31de1b2 --setuser ceph --setgroup ceph
Sep 30 14:13:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Sep 30 14:13:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4245724636' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Sep 30 14:13:41 compute-0 ceph-mon[74194]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:41 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4094420979' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Sep 30 14:13:41 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4245724636' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Sep 30 14:13:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:42 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Sep 30 14:13:42 compute-0 ceph-mon[74194]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Sep 30 14:13:43 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 2 completed events
Sep 30 14:13:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:13:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:43 compute-0 ceph-mon[74194]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:43 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:43 compute-0 flamboyant_sinoussi[80864]:  stderr: 2025-09-30T14:13:40.825+0000 7f87ed259740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Sep 30 14:13:43 compute-0 flamboyant_sinoussi[80864]:  stderr: 2025-09-30T14:13:41.090+0000 7f87ed259740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
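Note: the two stderr lines above ("No valid bdev label found", "_read_fsid unparsable uuid") typically appear during --mkfs on a freshly created LV, since no BlueStore label exists yet; the "prepare successful" line that follows suggests they were harmless here. To re-read the label after creation (sketch, run inside a cephadm shell on compute-0):

    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0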
Sep 30 14:13:43 compute-0 flamboyant_sinoussi[80864]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Sep 30 14:13:43 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 14:13:43 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Sep 30 14:13:44 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:44 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:44 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Sep 30 14:13:44 compute-0 flamboyant_sinoussi[80864]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 14:13:44 compute-0 flamboyant_sinoussi[80864]: --> ceph-volume lvm activate successful for osd ID: 0
Sep 30 14:13:44 compute-0 flamboyant_sinoussi[80864]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Sep 30 14:13:44 compute-0 systemd[1]: libpod-3631f91d1887678343d6769ac16a1545c21b7eeae10a486d3df4ffa00081e254.scope: Deactivated successfully.
Sep 30 14:13:44 compute-0 systemd[1]: libpod-3631f91d1887678343d6769ac16a1545c21b7eeae10a486d3df4ffa00081e254.scope: Consumed 2.038s CPU time.
Sep 30 14:13:44 compute-0 podman[81841]: 2025-09-30 14:13:44.33675243 +0000 UTC m=+0.025710722 container died 3631f91d1887678343d6769ac16a1545c21b7eeae10a486d3df4ffa00081e254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sinoussi, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-0404197c84b17de1151422eaffe13e7d43a09ce5ff9d68bb155e432c2ce8c65d-merged.mount: Deactivated successfully.
Sep 30 14:13:44 compute-0 podman[81841]: 2025-09-30 14:13:44.406435438 +0000 UTC m=+0.095393700 container remove 3631f91d1887678343d6769ac16a1545c21b7eeae10a486d3df4ffa00081e254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_sinoussi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:13:44 compute-0 systemd[1]: libpod-conmon-3631f91d1887678343d6769ac16a1545c21b7eeae10a486d3df4ffa00081e254.scope: Deactivated successfully.
Sep 30 14:13:44 compute-0 sudo[80744]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:44 compute-0 sudo[81854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:13:44 compute-0 sudo[81854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:44 compute-0 sudo[81854]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:44 compute-0 sudo[81879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:13:44 compute-0 sudo[81879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:44 compute-0 ceph-mon[74194]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:45 compute-0 podman[81944]: 2025-09-30 14:13:44.916841166 +0000 UTC m=+0.020390370 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:45 compute-0 podman[81944]: 2025-09-30 14:13:45.071762389 +0000 UTC m=+0.175311603 container create 59cb8f02426faf6051174d4a9120c0e44fb8f72a104cefe4f2d865e248e30405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noyce, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:13:45 compute-0 systemd[1]: Started libpod-conmon-59cb8f02426faf6051174d4a9120c0e44fb8f72a104cefe4f2d865e248e30405.scope.
Sep 30 14:13:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:45 compute-0 podman[81944]: 2025-09-30 14:13:45.159873026 +0000 UTC m=+0.263422240 container init 59cb8f02426faf6051174d4a9120c0e44fb8f72a104cefe4f2d865e248e30405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noyce, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:45 compute-0 podman[81944]: 2025-09-30 14:13:45.166587664 +0000 UTC m=+0.270136848 container start 59cb8f02426faf6051174d4a9120c0e44fb8f72a104cefe4f2d865e248e30405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noyce, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:13:45 compute-0 epic_noyce[81960]: 167 167
Sep 30 14:13:45 compute-0 systemd[1]: libpod-59cb8f02426faf6051174d4a9120c0e44fb8f72a104cefe4f2d865e248e30405.scope: Deactivated successfully.
Sep 30 14:13:45 compute-0 podman[81944]: 2025-09-30 14:13:45.184694255 +0000 UTC m=+0.288243439 container attach 59cb8f02426faf6051174d4a9120c0e44fb8f72a104cefe4f2d865e248e30405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noyce, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:13:45 compute-0 podman[81944]: 2025-09-30 14:13:45.185207518 +0000 UTC m=+0.288756702 container died 59cb8f02426faf6051174d4a9120c0e44fb8f72a104cefe4f2d865e248e30405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Sep 30 14:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b8304e39341050d7d3cb9a55b6b54ea66ef1cc57114dbc8ad506b7c6c587dca-merged.mount: Deactivated successfully.
Sep 30 14:13:45 compute-0 podman[81944]: 2025-09-30 14:13:45.318334738 +0000 UTC m=+0.421883922 container remove 59cb8f02426faf6051174d4a9120c0e44fb8f72a104cefe4f2d865e248e30405 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 14:13:45 compute-0 systemd[1]: libpod-conmon-59cb8f02426faf6051174d4a9120c0e44fb8f72a104cefe4f2d865e248e30405.scope: Deactivated successfully.
Sep 30 14:13:45 compute-0 podman[81987]: 2025-09-30 14:13:45.483884266 +0000 UTC m=+0.053698010 container create 1c0b91d7104241e090dd413317fc5273332ae50d4f51eafac15d94b3ab9f17b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:13:45 compute-0 systemd[1]: Started libpod-conmon-1c0b91d7104241e090dd413317fc5273332ae50d4f51eafac15d94b3ab9f17b3.scope.
Sep 30 14:13:45 compute-0 podman[81987]: 2025-09-30 14:13:45.452144784 +0000 UTC m=+0.021958548 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8013358a1108faaa32dc3266f1ef63ef23bb34c9449b669d2040cd4996ae1ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8013358a1108faaa32dc3266f1ef63ef23bb34c9449b669d2040cd4996ae1ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8013358a1108faaa32dc3266f1ef63ef23bb34c9449b669d2040cd4996ae1ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8013358a1108faaa32dc3266f1ef63ef23bb34c9449b669d2040cd4996ae1ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:45 compute-0 podman[81987]: 2025-09-30 14:13:45.587364316 +0000 UTC m=+0.157178080 container init 1c0b91d7104241e090dd413317fc5273332ae50d4f51eafac15d94b3ab9f17b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:13:45 compute-0 podman[81987]: 2025-09-30 14:13:45.593421597 +0000 UTC m=+0.163235341 container start 1c0b91d7104241e090dd413317fc5273332ae50d4f51eafac15d94b3ab9f17b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:13:45 compute-0 podman[81987]: 2025-09-30 14:13:45.615343944 +0000 UTC m=+0.185157688 container attach 1c0b91d7104241e090dd413317fc5273332ae50d4f51eafac15d94b3ab9f17b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_perlman, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:13:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:45 compute-0 brave_perlman[82003]: {
Sep 30 14:13:45 compute-0 brave_perlman[82003]:     "0": [
Sep 30 14:13:45 compute-0 brave_perlman[82003]:         {
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "devices": [
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "/dev/loop3"
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             ],
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "lv_name": "ceph_lv0",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "lv_size": "21470642176",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "name": "ceph_lv0",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "tags": {
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.cluster_name": "ceph",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.crush_device_class": "",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.encrypted": "0",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.osd_id": "0",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.type": "block",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.vdo": "0",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:                 "ceph.with_tpm": "0"
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             },
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "type": "block",
Sep 30 14:13:45 compute-0 brave_perlman[82003]:             "vg_name": "ceph_vg0"
Sep 30 14:13:45 compute-0 brave_perlman[82003]:         }
Sep 30 14:13:45 compute-0 brave_perlman[82003]:     ]
Sep 30 14:13:45 compute-0 brave_perlman[82003]: }
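Note: the JSON report above maps each OSD id ("0") to its LVs together with the ceph.* tags ceph-volume recorded (cluster fsid, osd_fsid, osdspec affinity). Individual fields can be pulled from the same report, for example (sketch; jq on the host is an assumption, and 'cephadm' stands in for the copied script invoked in the sudo lines above):

    sudo cephadm ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json \
        | jq -r '."0"[0].tags."ceph.osd_fsid"'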
Sep 30 14:13:45 compute-0 systemd[1]: libpod-1c0b91d7104241e090dd413317fc5273332ae50d4f51eafac15d94b3ab9f17b3.scope: Deactivated successfully.
Sep 30 14:13:45 compute-0 podman[81987]: 2025-09-30 14:13:45.873869681 +0000 UTC m=+0.443683435 container died 1c0b91d7104241e090dd413317fc5273332ae50d4f51eafac15d94b3ab9f17b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8013358a1108faaa32dc3266f1ef63ef23bb34c9449b669d2040cd4996ae1ff-merged.mount: Deactivated successfully.
Sep 30 14:13:45 compute-0 podman[81987]: 2025-09-30 14:13:45.931393665 +0000 UTC m=+0.501207409 container remove 1c0b91d7104241e090dd413317fc5273332ae50d4f51eafac15d94b3ab9f17b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:45 compute-0 systemd[1]: libpod-conmon-1c0b91d7104241e090dd413317fc5273332ae50d4f51eafac15d94b3ab9f17b3.scope: Deactivated successfully.
Sep 30 14:13:45 compute-0 sudo[81879]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Sep 30 14:13:45 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Sep 30 14:13:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:45 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:45 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Sep 30 14:13:45 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Sep 30 14:13:46 compute-0 sudo[82024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:13:46 compute-0 sudo[82024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:46 compute-0 sudo[82024]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:46 compute-0 sudo[82049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:13:46 compute-0 sudo[82049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Sep 30 14:13:46 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Sep 30 14:13:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:13:46 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:46 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Sep 30 14:13:46 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Sep 30 14:13:46 compute-0 podman[82115]: 2025-09-30 14:13:46.50261384 +0000 UTC m=+0.050623974 container create 994fb88bbd817888d8ea192afeef5061bc35fa2103002a42d612332b2491a2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Sep 30 14:13:46 compute-0 systemd[1]: Started libpod-conmon-994fb88bbd817888d8ea192afeef5061bc35fa2103002a42d612332b2491a2d1.scope.
Sep 30 14:13:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:46 compute-0 podman[82115]: 2025-09-30 14:13:46.471807931 +0000 UTC m=+0.019818095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:46 compute-0 podman[82115]: 2025-09-30 14:13:46.578238285 +0000 UTC m=+0.126248459 container init 994fb88bbd817888d8ea192afeef5061bc35fa2103002a42d612332b2491a2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_villani, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:13:46 compute-0 podman[82115]: 2025-09-30 14:13:46.588780598 +0000 UTC m=+0.136790782 container start 994fb88bbd817888d8ea192afeef5061bc35fa2103002a42d612332b2491a2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:13:46 compute-0 vibrant_villani[82132]: 167 167
Sep 30 14:13:46 compute-0 systemd[1]: libpod-994fb88bbd817888d8ea192afeef5061bc35fa2103002a42d612332b2491a2d1.scope: Deactivated successfully.
Sep 30 14:13:46 compute-0 podman[82115]: 2025-09-30 14:13:46.618224633 +0000 UTC m=+0.166234777 container attach 994fb88bbd817888d8ea192afeef5061bc35fa2103002a42d612332b2491a2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:46 compute-0 podman[82115]: 2025-09-30 14:13:46.618716395 +0000 UTC m=+0.166726539 container died 994fb88bbd817888d8ea192afeef5061bc35fa2103002a42d612332b2491a2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_villani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-882c4c309996ad56c6af48e956cf0c7d4e2329b61eda1d8e144a111b39cc2190-merged.mount: Deactivated successfully.
Sep 30 14:13:46 compute-0 podman[82115]: 2025-09-30 14:13:46.728984363 +0000 UTC m=+0.276994507 container remove 994fb88bbd817888d8ea192afeef5061bc35fa2103002a42d612332b2491a2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_villani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:13:46 compute-0 systemd[1]: libpod-conmon-994fb88bbd817888d8ea192afeef5061bc35fa2103002a42d612332b2491a2d1.scope: Deactivated successfully.
Sep 30 14:13:46 compute-0 ceph-mon[74194]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:46 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Sep 30 14:13:46 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:46 compute-0 ceph-mon[74194]: Deploying daemon osd.0 on compute-0
Sep 30 14:13:46 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Sep 30 14:13:46 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:13:46 compute-0 ceph-mon[74194]: Deploying daemon osd.1 on compute-1
Sep 30 14:13:47 compute-0 podman[82163]: 2025-09-30 14:13:46.940225791 +0000 UTC m=+0.022443021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:47 compute-0 podman[82163]: 2025-09-30 14:13:47.092494038 +0000 UTC m=+0.174711258 container create d5f4b6381f9791290047b392545d346c2864ea7cc20c2fa6940429010a9efda8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:13:47 compute-0 systemd[1]: Started libpod-conmon-d5f4b6381f9791290047b392545d346c2864ea7cc20c2fa6940429010a9efda8.scope.
Sep 30 14:13:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:47 compute-0 sudo[82206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etdqpphropjrxaipamwfgvotsthljhjb ; /usr/bin/python3'
Sep 30 14:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566e902aa71426065c290459ff0c56c31e6a0208dbd189ff5be3137f32717014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566e902aa71426065c290459ff0c56c31e6a0208dbd189ff5be3137f32717014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:47 compute-0 sudo[82206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566e902aa71426065c290459ff0c56c31e6a0208dbd189ff5be3137f32717014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566e902aa71426065c290459ff0c56c31e6a0208dbd189ff5be3137f32717014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566e902aa71426065c290459ff0c56c31e6a0208dbd189ff5be3137f32717014/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:47 compute-0 podman[82163]: 2025-09-30 14:13:47.190444401 +0000 UTC m=+0.272661641 container init d5f4b6381f9791290047b392545d346c2864ea7cc20c2fa6940429010a9efda8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:13:47 compute-0 podman[82163]: 2025-09-30 14:13:47.198576703 +0000 UTC m=+0.280793913 container start d5f4b6381f9791290047b392545d346c2864ea7cc20c2fa6940429010a9efda8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:13:47 compute-0 podman[82163]: 2025-09-30 14:13:47.25537102 +0000 UTC m=+0.337588250 container attach d5f4b6381f9791290047b392545d346c2864ea7cc20c2fa6940429010a9efda8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:13:47 compute-0 python3[82208]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
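[annotation] The Ansible task recorded above checks how many OSDs are up by running the ceph CLI in a throwaway container and filtering the JSON output with jq. A minimal re-run of the same check, assuming /etc/ceph already holds ceph.conf and the admin keyring on the host (the retry loop is illustrative and not part of the recorded command):
    #!/bin/bash
    # Hypothetical poll based on the command logged above: count up OSDs via
    # the containerized ceph CLI and retry until at least one reports up.
    FSID=5e3c7776-ac03-5698-b79f-a6dc2d80cae6
    for i in $(seq 1 30); do
        up=$(podman run --rm --net=host --ipc=host \
               --volume /etc/ceph:/etc/ceph:z \
               --entrypoint ceph quay.io/ceph/ceph:v19 \
               --fsid "$FSID" -c /etc/ceph/ceph.conf \
               -k /etc/ceph/ceph.client.admin.keyring \
               status --format json | jq .osdmap.num_up_osds)
        [ "$up" -gt 0 ] && break
        sleep 10
    done
    echo "num_up_osds=$up"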
Sep 30 14:13:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate-test[82196]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Sep 30 14:13:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate-test[82196]:                             [--no-systemd] [--no-tmpfs]
Sep 30 14:13:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate-test[82196]: ceph-volume activate: error: unrecognized arguments: --bad-option
Sep 30 14:13:47 compute-0 systemd[1]: libpod-d5f4b6381f9791290047b392545d346c2864ea7cc20c2fa6940429010a9efda8.scope: Deactivated successfully.
Sep 30 14:13:47 compute-0 podman[82163]: 2025-09-30 14:13:47.385742981 +0000 UTC m=+0.467960191 container died d5f4b6381f9791290047b392545d346c2864ea7cc20c2fa6940429010a9efda8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate-test, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-566e902aa71426065c290459ff0c56c31e6a0208dbd189ff5be3137f32717014-merged.mount: Deactivated successfully.
Sep 30 14:13:47 compute-0 podman[82163]: 2025-09-30 14:13:47.464252698 +0000 UTC m=+0.546469898 container remove d5f4b6381f9791290047b392545d346c2864ea7cc20c2fa6940429010a9efda8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 14:13:47 compute-0 systemd[1]: libpod-conmon-d5f4b6381f9791290047b392545d346c2864ea7cc20c2fa6940429010a9efda8.scope: Deactivated successfully.
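[annotation] The short-lived "osd-0-activate-test" container above fails with a usage error on purpose: cephadm appears to probe whether the image's ceph-volume understands the bare "activate" subcommand by passing an unknown flag and inspecting the error text. A rough sketch of that probe; the --bad-option flag and image digest come from the log, while the grep-based interpretation of the result is an assumption:
    # Hypothetical probe: if ceph-volume knows the top-level 'activate'
    # subcommand, an unknown flag yields "unrecognized arguments" rather
    # than an unknown-command error.
    IMAGE=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
    if podman run --rm --entrypoint ceph-volume "$IMAGE" \
         activate --bad-option 2>&1 | grep -q 'unrecognized arguments'; then
        echo "image supports 'ceph-volume activate'"
    fi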
Sep 30 14:13:47 compute-0 podman[82212]: 2025-09-30 14:13:47.373159137 +0000 UTC m=+0.020940643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:13:47 compute-0 podman[82212]: 2025-09-30 14:13:47.54972011 +0000 UTC m=+0.197501586 container create 518d81af9965a0fee0e908d4b97ac9e5cd0bb732089dfe95486a96d6870e684a (image=quay.io/ceph/ceph:v19, name=infallible_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:13:47 compute-0 systemd[1]: Started libpod-conmon-518d81af9965a0fee0e908d4b97ac9e5cd0bb732089dfe95486a96d6870e684a.scope.
Sep 30 14:13:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6286ab16628d122778e6fefa8cf21c9ebe0b751c3f023d7db4ba7a399fbfda/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6286ab16628d122778e6fefa8cf21c9ebe0b751c3f023d7db4ba7a399fbfda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6286ab16628d122778e6fefa8cf21c9ebe0b751c3f023d7db4ba7a399fbfda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:47 compute-0 podman[82212]: 2025-09-30 14:13:47.750880646 +0000 UTC m=+0.398662172 container init 518d81af9965a0fee0e908d4b97ac9e5cd0bb732089dfe95486a96d6870e684a (image=quay.io/ceph/ceph:v19, name=infallible_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:13:47 compute-0 podman[82212]: 2025-09-30 14:13:47.757784528 +0000 UTC m=+0.405566014 container start 518d81af9965a0fee0e908d4b97ac9e5cd0bb732089dfe95486a96d6870e684a (image=quay.io/ceph/ceph:v19, name=infallible_agnesi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:13:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:47 compute-0 podman[82212]: 2025-09-30 14:13:47.847510605 +0000 UTC m=+0.495292091 container attach 518d81af9965a0fee0e908d4b97ac9e5cd0bb732089dfe95486a96d6870e684a (image=quay.io/ceph/ceph:v19, name=infallible_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:13:47 compute-0 systemd[1]: Reloading.
Sep 30 14:13:48 compute-0 systemd-sysv-generator[82314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:13:48 compute-0 systemd-rc-local-generator[82311]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:13:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Sep 30 14:13:48 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366720611' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 14:13:48 compute-0 infallible_agnesi[82255]: 
Sep 30 14:13:48 compute-0 infallible_agnesi[82255]: {"fsid":"5e3c7776-ac03-5698-b79f-a6dc2d80cae6","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":98,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1759241620,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-09-30T14:12:06:949277+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-09-30T14:13:34.377104+0000","services":{}},"progress_events":{}}
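[annotation] The status dump above reports HEALTH_WARN (CEPHADM_APPLY_SPEC_FAIL for the mon/mgr specs) and 2 OSDs created but 0 up yet. A small jq sketch for pulling the interesting fields out of such a dump, assuming the JSON has been saved to a file named status.json:
    # Extract health status, failing check names, and OSD up/in counts
    # from a saved 'ceph status --format json' dump.
    jq '{health: .health.status,
         checks: (.health.checks | keys),
         osds_up: .osdmap.num_up_osds,
         osds_in: .osdmap.num_in_osds}' status.json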
Sep 30 14:13:48 compute-0 podman[82212]: 2025-09-30 14:13:48.21381828 +0000 UTC m=+0.861599776 container died 518d81af9965a0fee0e908d4b97ac9e5cd0bb732089dfe95486a96d6870e684a (image=quay.io/ceph/ceph:v19, name=infallible_agnesi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:48 compute-0 systemd[1]: libpod-518d81af9965a0fee0e908d4b97ac9e5cd0bb732089dfe95486a96d6870e684a.scope: Deactivated successfully.
Sep 30 14:13:48 compute-0 systemd[1]: Reloading.
Sep 30 14:13:48 compute-0 systemd-sysv-generator[82368]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:13:48 compute-0 systemd-rc-local-generator[82363]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:13:48 compute-0 systemd[1]: Starting Ceph osd.0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee6286ab16628d122778e6fefa8cf21c9ebe0b751c3f023d7db4ba7a399fbfda-merged.mount: Deactivated successfully.
Sep 30 14:13:48 compute-0 podman[82212]: 2025-09-30 14:13:48.579333164 +0000 UTC m=+1.227114640 container remove 518d81af9965a0fee0e908d4b97ac9e5cd0bb732089dfe95486a96d6870e684a (image=quay.io/ceph/ceph:v19, name=infallible_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:13:48 compute-0 systemd[1]: libpod-conmon-518d81af9965a0fee0e908d4b97ac9e5cd0bb732089dfe95486a96d6870e684a.scope: Deactivated successfully.
Sep 30 14:13:48 compute-0 sudo[82206]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:48 compute-0 podman[82426]: 2025-09-30 14:13:48.7375651 +0000 UTC m=+0.042075250 container create 14439abefcd2b5d4534f65919d014a92a07c89c97a34a71ba16da1267ec56a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:13:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51071ebd687f48851697d119dd0589c5175a29e20ae450a29199c8d740b4197/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51071ebd687f48851697d119dd0589c5175a29e20ae450a29199c8d740b4197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51071ebd687f48851697d119dd0589c5175a29e20ae450a29199c8d740b4197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51071ebd687f48851697d119dd0589c5175a29e20ae450a29199c8d740b4197/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51071ebd687f48851697d119dd0589c5175a29e20ae450a29199c8d740b4197/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:48 compute-0 podman[82426]: 2025-09-30 14:13:48.806061968 +0000 UTC m=+0.110572138 container init 14439abefcd2b5d4534f65919d014a92a07c89c97a34a71ba16da1267ec56a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:13:48 compute-0 podman[82426]: 2025-09-30 14:13:48.812013597 +0000 UTC m=+0.116523747 container start 14439abefcd2b5d4534f65919d014a92a07c89c97a34a71ba16da1267ec56a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:13:48 compute-0 podman[82426]: 2025-09-30 14:13:48.717373447 +0000 UTC m=+0.021883647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:48 compute-0 podman[82426]: 2025-09-30 14:13:48.817877953 +0000 UTC m=+0.122388103 container attach 14439abefcd2b5d4534f65919d014a92a07c89c97a34a71ba16da1267ec56a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 14:13:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 14:13:48 compute-0 bash[82426]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 14:13:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 14:13:48 compute-0 bash[82426]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 14:13:48 compute-0 ceph-mon[74194]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:48 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2366720611' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 14:13:49 compute-0 lvm[82522]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:13:49 compute-0 lvm[82522]: VG ceph_vg0 finished
Sep 30 14:13:49 compute-0 lvm[82526]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:13:49 compute-0 lvm[82526]: VG ceph_vg0 finished
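[annotation] The OSD being activated here lives on LVM volume ceph_lv0 in VG ceph_vg0, backed by loop device /dev/loop3, which is a common arrangement for CI lab nodes without spare disks. A hedged sketch of how such a loop-backed volume group might have been prepared; only the VG/LV names, /dev/loop3, and the 20 GiB size come from the log, the backing file path is an assumption:
    # Hypothetical setup (run as root) for a loop-backed OSD volume like the
    # one the log shows being activated.
    truncate -s 20G /var/lib/ceph-osd-0.img
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -l 100%FREE -n ceph_lv0 ceph_vg0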
Sep 30 14:13:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: --> Failed to activate via raw: did not find any matching OSD to activate
Sep 30 14:13:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 14:13:49 compute-0 bash[82426]: --> Failed to activate via raw: did not find any matching OSD to activate
Sep 30 14:13:49 compute-0 bash[82426]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 14:13:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 14:13:49 compute-0 bash[82426]: Running command: /usr/bin/ceph-authtool --gen-print-key
Sep 30 14:13:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 14:13:49 compute-0 bash[82426]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 14:13:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Sep 30 14:13:49 compute-0 bash[82426]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Sep 30 14:13:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 bash[82426]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 bash[82426]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Sep 30 14:13:50 compute-0 bash[82426]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Sep 30 14:13:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 14:13:50 compute-0 bash[82426]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Sep 30 14:13:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate[82441]: --> ceph-volume lvm activate successful for osd ID: 0
Sep 30 14:13:50 compute-0 bash[82426]: --> ceph-volume lvm activate successful for osd ID: 0
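[annotation] The activate container's output above shows the standard ceph-volume lvm activation sequence: prime the OSD directory from the BlueStore device, symlink the block device into place, and hand ownership to ceph:ceph. A condensed equivalent of the logged commands; running them directly on the host assumes the ceph packages (ceph-bluestore-tool) are installed there rather than only inside the container:
    # Manual equivalent of the activation steps logged above.
    ceph-bluestore-tool --cluster=ceph prime-osd-dir \
        --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
    ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
    chown -R ceph:ceph /dev/dm-0
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-0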
Sep 30 14:13:50 compute-0 systemd[1]: libpod-14439abefcd2b5d4534f65919d014a92a07c89c97a34a71ba16da1267ec56a55.scope: Deactivated successfully.
Sep 30 14:13:50 compute-0 systemd[1]: libpod-14439abefcd2b5d4534f65919d014a92a07c89c97a34a71ba16da1267ec56a55.scope: Consumed 1.352s CPU time.
Sep 30 14:13:50 compute-0 podman[82426]: 2025-09-30 14:13:50.078895509 +0000 UTC m=+1.383405669 container died 14439abefcd2b5d4534f65919d014a92a07c89c97a34a71ba16da1267ec56a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d51071ebd687f48851697d119dd0589c5175a29e20ae450a29199c8d740b4197-merged.mount: Deactivated successfully.
Sep 30 14:13:50 compute-0 podman[82426]: 2025-09-30 14:13:50.18924294 +0000 UTC m=+1.493753090 container remove 14439abefcd2b5d4534f65919d014a92a07c89c97a34a71ba16da1267ec56a55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0-activate, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:50 compute-0 podman[82688]: 2025-09-30 14:13:50.384735024 +0000 UTC m=+0.048135261 container create 2db0b61839febaeb169c319a8ec41f31021cd825e11b2bc8323ea29067005683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 14:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9c2ec9fc8be0b29b9eb5f254d76384f2ea4cff60ef011d49a1b00cb0bb56ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9c2ec9fc8be0b29b9eb5f254d76384f2ea4cff60ef011d49a1b00cb0bb56ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9c2ec9fc8be0b29b9eb5f254d76384f2ea4cff60ef011d49a1b00cb0bb56ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9c2ec9fc8be0b29b9eb5f254d76384f2ea4cff60ef011d49a1b00cb0bb56ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9c2ec9fc8be0b29b9eb5f254d76384f2ea4cff60ef011d49a1b00cb0bb56ab/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:50 compute-0 podman[82688]: 2025-09-30 14:13:50.450537475 +0000 UTC m=+0.113937732 container init 2db0b61839febaeb169c319a8ec41f31021cd825e11b2bc8323ea29067005683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:50 compute-0 podman[82688]: 2025-09-30 14:13:50.356974042 +0000 UTC m=+0.020374299 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:50 compute-0 podman[82688]: 2025-09-30 14:13:50.456705049 +0000 UTC m=+0.120105286 container start 2db0b61839febaeb169c319a8ec41f31021cd825e11b2bc8323ea29067005683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:13:50 compute-0 bash[82688]: 2db0b61839febaeb169c319a8ec41f31021cd825e11b2bc8323ea29067005683
Sep 30 14:13:50 compute-0 systemd[1]: Started Ceph osd.0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
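[annotation] Once systemd reports "Started Ceph osd.0", the daemon can be inspected like any other cephadm-managed service. A short sketch, assuming the unit follows cephadm's ceph-<fsid>@<daemon>.service naming convention (verify the exact name with systemctl list-units 'ceph-*'):
    # Hypothetical status check for the unit started above.
    systemctl status 'ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@osd.0.service'
    journalctl -u 'ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@osd.0.service' --since '-10min'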
Sep 30 14:13:50 compute-0 ceph-osd[82707]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 14:13:50 compute-0 ceph-osd[82707]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Sep 30 14:13:50 compute-0 ceph-osd[82707]: pidfile_write: ignore empty --pid-file
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:50 compute-0 sudo[82049]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:13:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:13:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:50 compute-0 sudo[82719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:13:50 compute-0 sudo[82719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:50 compute-0 sudo[82719]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:50 compute-0 sudo[82744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:13:50 compute-0 sudo[82744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a344f9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a344f9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a344f9400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Sep 30 14:13:50 compute-0 ceph-osd[82707]: bdev(0x559a344f9400 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:51 compute-0 podman[82821]: 2025-09-30 14:13:51.037136353 +0000 UTC m=+0.058123881 container create a87049d63065cff54f7bdf6f53dc0003e34e5df4dc1bb2f4c7749c220d9551f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:13:51 compute-0 systemd[1]: Started libpod-conmon-a87049d63065cff54f7bdf6f53dc0003e34e5df4dc1bb2f4c7749c220d9551f8.scope.
Sep 30 14:13:51 compute-0 podman[82821]: 2025-09-30 14:13:50.998892259 +0000 UTC m=+0.019879807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:51 compute-0 podman[82821]: 2025-09-30 14:13:51.129782003 +0000 UTC m=+0.150769561 container init a87049d63065cff54f7bdf6f53dc0003e34e5df4dc1bb2f4c7749c220d9551f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hermann, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:13:51 compute-0 podman[82821]: 2025-09-30 14:13:51.138470739 +0000 UTC m=+0.159458267 container start a87049d63065cff54f7bdf6f53dc0003e34e5df4dc1bb2f4c7749c220d9551f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:51 compute-0 priceless_hermann[82837]: 167 167
Sep 30 14:13:51 compute-0 systemd[1]: libpod-a87049d63065cff54f7bdf6f53dc0003e34e5df4dc1bb2f4c7749c220d9551f8.scope: Deactivated successfully.
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a336d9800 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:51 compute-0 podman[82821]: 2025-09-30 14:13:51.192419575 +0000 UTC m=+0.213407123 container attach a87049d63065cff54f7bdf6f53dc0003e34e5df4dc1bb2f4c7749c220d9551f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:51 compute-0 podman[82821]: 2025-09-30 14:13:51.193683506 +0000 UTC m=+0.214671034 container died a87049d63065cff54f7bdf6f53dc0003e34e5df4dc1bb2f4c7749c220d9551f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ac64db3caa62a939301c81876e4d33afc11fbee971c7af8a6d00043505b896c-merged.mount: Deactivated successfully.
Sep 30 14:13:51 compute-0 podman[82821]: 2025-09-30 14:13:51.262146054 +0000 UTC m=+0.283133582 container remove a87049d63065cff54f7bdf6f53dc0003e34e5df4dc1bb2f4c7749c220d9551f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hermann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:13:51 compute-0 systemd[1]: libpod-conmon-a87049d63065cff54f7bdf6f53dc0003e34e5df4dc1bb2f4c7749c220d9551f8.scope: Deactivated successfully.
Sep 30 14:13:51 compute-0 ceph-osd[82707]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Sep 30 14:13:51 compute-0 podman[82862]: 2025-09-30 14:13:51.421386374 +0000 UTC m=+0.050923620 container create 08821c2a10fcede8c338f98d052a187214be30444e295e531316025df23b12b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ramanujan, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 14:13:51 compute-0 ceph-osd[82707]: load: jerasure load: lrc 
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:51 compute-0 systemd[1]: Started libpod-conmon-08821c2a10fcede8c338f98d052a187214be30444e295e531316025df23b12b5.scope.
Sep 30 14:13:51 compute-0 podman[82862]: 2025-09-30 14:13:51.392991876 +0000 UTC m=+0.022529142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1543c2ceaa87bb5b3c25d13bab54acab22893afef2932fbf0b57f9130aabd004/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1543c2ceaa87bb5b3c25d13bab54acab22893afef2932fbf0b57f9130aabd004/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1543c2ceaa87bb5b3c25d13bab54acab22893afef2932fbf0b57f9130aabd004/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1543c2ceaa87bb5b3c25d13bab54acab22893afef2932fbf0b57f9130aabd004/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:13:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:51 compute-0 podman[82862]: 2025-09-30 14:13:51.520963418 +0000 UTC m=+0.150500684 container init 08821c2a10fcede8c338f98d052a187214be30444e295e531316025df23b12b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:13:51 compute-0 podman[82862]: 2025-09-30 14:13:51.528706041 +0000 UTC m=+0.158243287 container start 08821c2a10fcede8c338f98d052a187214be30444e295e531316025df23b12b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ramanujan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:13:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:51 compute-0 podman[82862]: 2025-09-30 14:13:51.533305815 +0000 UTC m=+0.162843081 container attach 08821c2a10fcede8c338f98d052a187214be30444e295e531316025df23b12b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ramanujan, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:13:51 compute-0 ceph-mon[74194]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:51 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:51 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:51 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:51 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:51 compute-0 ceph-osd[82707]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Sep 30 14:13:51 compute-0 ceph-osd[82707]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:51 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:52 compute-0 lvm[82969]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:13:52 compute-0 lvm[82969]: VG ceph_vg0 finished
Sep 30 14:13:52 compute-0 angry_ramanujan[82883]: {}
Sep 30 14:13:52 compute-0 systemd[1]: libpod-08821c2a10fcede8c338f98d052a187214be30444e295e531316025df23b12b5.scope: Deactivated successfully.
Sep 30 14:13:52 compute-0 systemd[1]: libpod-08821c2a10fcede8c338f98d052a187214be30444e295e531316025df23b12b5.scope: Consumed 1.041s CPU time.
Sep 30 14:13:52 compute-0 podman[82862]: 2025-09-30 14:13:52.211793694 +0000 UTC m=+0.841330940 container died 08821c2a10fcede8c338f98d052a187214be30444e295e531316025df23b12b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ramanujan, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1543c2ceaa87bb5b3c25d13bab54acab22893afef2932fbf0b57f9130aabd004-merged.mount: Deactivated successfully.
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:52 compute-0 podman[82862]: 2025-09-30 14:13:52.261553415 +0000 UTC m=+0.891090661 container remove 08821c2a10fcede8c338f98d052a187214be30444e295e531316025df23b12b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:13:52 compute-0 systemd[1]: libpod-conmon-08821c2a10fcede8c338f98d052a187214be30444e295e531316025df23b12b5.scope: Deactivated successfully.
Sep 30 14:13:52 compute-0 sudo[82744]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:13:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:13:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a344f9c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a346e2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a346e2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a346e2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluefs mount
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluefs mount shared_bdev_used = 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: RocksDB version: 7.9.2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Git sha 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Compile date 2025-07-17 03:12:14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: DB SUMMARY
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: DB Session ID:  AJK2E8JX5EBGLIW1W53S
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: CURRENT file:  CURRENT
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: IDENTITY file:  IDENTITY
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                         Options.error_if_exists: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.create_if_missing: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                         Options.paranoid_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.flush_verify_memtable_count: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                                     Options.env: 0x559a34545dc0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                                      Options.fs: LegacyFileSystem
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                                Options.info_log: 0x559a345497e0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_file_opening_threads: 16
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                              Options.statistics: (nil)
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.use_fsync: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.max_log_file_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.log_file_time_to_roll: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.keep_log_file_num: 1000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.recycle_log_file_num: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                         Options.allow_fallocate: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.allow_mmap_reads: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.allow_mmap_writes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.use_direct_reads: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.create_missing_column_families: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                              Options.db_log_dir: 
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                                 Options.wal_dir: db.wal
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.table_cache_numshardbits: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                         Options.WAL_ttl_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.WAL_size_limit_MB: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.manifest_preallocation_size: 4194304
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                     Options.is_fd_close_on_exec: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.advise_random_on_open: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.db_write_buffer_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.write_buffer_manager: 0x559a3463ea00
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.access_hint_on_compaction_start: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                      Options.use_adaptive_mutex: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                            Options.rate_limiter: (nil)
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.wal_recovery_mode: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.enable_thread_tracking: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.enable_pipelined_write: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.unordered_write: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.write_thread_max_yield_usec: 100
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.row_cache: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                              Options.wal_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.avoid_flush_during_recovery: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.allow_ingest_behind: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.two_write_queues: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.manual_wal_flush: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.wal_compression: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.atomic_flush: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.persist_stats_to_disk: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.write_dbid_to_manifest: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.log_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.best_efforts_recovery: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.allow_data_in_errors: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.db_host_id: __hostname__
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.enforce_single_del_contracts: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.max_background_jobs: 4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.max_background_compactions: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.max_subcompactions: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.writable_file_max_buffer_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.delayed_write_rate : 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.max_total_wal_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.stats_dump_period_sec: 600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.stats_persist_period_sec: 600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.max_open_files: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.bytes_per_sync: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                      Options.wal_bytes_per_sync: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.strict_bytes_per_sync: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.compaction_readahead_size: 2097152
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.max_background_flushes: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Compression algorithms supported:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         kZSTD supported: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         kXpressCompression supported: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         kBZip2Compression supported: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         kZSTDNotFinalCompression supported: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         kLZ4Compression supported: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         kZlibCompression supported: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         kLZ4HCCompression supported: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         kSnappyCompression supported: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Fast CRC32 supported: Supported on x86
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: DMutex implementation: pthread_mutex_t
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549ba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ec5851b7-6f91-46cf-a703-2ecb3eeaf577
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241632796349, "job": 1, "event": "recovery_started", "wal_files": [31]}
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241632796528, "job": 1, "event": "recovery_finished"}
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Sep 30 14:13:52 compute-0 ceph-osd[82707]: freelist init
Sep 30 14:13:52 compute-0 ceph-osd[82707]: freelist _read_cfg
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Sep 30 14:13:52 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bluefs umount
Sep 30 14:13:52 compute-0 ceph-osd[82707]: bdev(0x559a346e2000 /var/lib/ceph/osd/ceph-0/block) close
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bdev(0x559a346e2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bdev(0x559a346e2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bdev(0x559a346e2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluefs mount
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluefs mount shared_bdev_used = 4718592
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: RocksDB version: 7.9.2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Git sha 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Compile date 2025-07-17 03:12:14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: DB SUMMARY
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: DB Session ID:  AJK2E8JX5EBGLIW1W53T
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: CURRENT file:  CURRENT
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: IDENTITY file:  IDENTITY
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                         Options.error_if_exists: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.create_if_missing: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                         Options.paranoid_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.flush_verify_memtable_count: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                                     Options.env: 0x559a346f0310
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                                      Options.fs: LegacyFileSystem
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                                Options.info_log: 0x559a34549960
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_file_opening_threads: 16
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                              Options.statistics: (nil)
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.use_fsync: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.max_log_file_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.log_file_time_to_roll: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.keep_log_file_num: 1000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.recycle_log_file_num: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                         Options.allow_fallocate: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.allow_mmap_reads: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.allow_mmap_writes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.use_direct_reads: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.create_missing_column_families: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                              Options.db_log_dir: 
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                                 Options.wal_dir: db.wal
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.table_cache_numshardbits: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                         Options.WAL_ttl_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.WAL_size_limit_MB: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.manifest_preallocation_size: 4194304
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                     Options.is_fd_close_on_exec: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.advise_random_on_open: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.db_write_buffer_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.write_buffer_manager: 0x559a3463ea00
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.access_hint_on_compaction_start: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                      Options.use_adaptive_mutex: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                            Options.rate_limiter: (nil)
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.wal_recovery_mode: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.enable_thread_tracking: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.enable_pipelined_write: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.unordered_write: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.write_thread_max_yield_usec: 100
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.row_cache: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                              Options.wal_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.avoid_flush_during_recovery: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.allow_ingest_behind: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.two_write_queues: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.manual_wal_flush: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.wal_compression: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.atomic_flush: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.persist_stats_to_disk: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.write_dbid_to_manifest: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.log_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.best_efforts_recovery: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.allow_data_in_errors: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.db_host_id: __hostname__
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.enforce_single_del_contracts: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.max_background_jobs: 4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.max_background_compactions: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.max_subcompactions: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.writable_file_max_buffer_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.delayed_write_rate : 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.max_total_wal_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.stats_dump_period_sec: 600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.stats_persist_period_sec: 600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.max_open_files: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.bytes_per_sync: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                      Options.wal_bytes_per_sync: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.strict_bytes_per_sync: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.compaction_readahead_size: 2097152
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.max_background_flushes: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Compression algorithms supported:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         kZSTD supported: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         kXpressCompression supported: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         kBZip2Compression supported: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         kZSTDNotFinalCompression supported: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         kLZ4Compression supported: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         kZlibCompression supported: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         kLZ4HCCompression supported: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         kSnappyCompression supported: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Fast CRC32 supported: Supported on x86
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: DMutex implementation: pthread_mutex_t
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a345496c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a345496c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a345496c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a345496c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a345496c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a345496c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a345496c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376f350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:           Options.merge_operator: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.compaction_filter_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.sst_partitioner_factory: None
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.memtable_factory: SkipListFactory
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.table_factory: BlockBasedTable
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a34549b00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x559a3376e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.write_buffer_size: 16777216
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.max_write_buffer_number: 64
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.compression: LZ4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression: Disabled
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.num_levels: 7
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:            Options.compression_opts.window_bits: -14
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.level: 32767
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.compression_opts.strategy: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                  Options.compression_opts.enabled: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.target_file_size_base: 67108864
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:             Options.target_file_size_multiplier: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.arena_block_size: 1048576
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.disable_auto_compactions: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.inplace_update_support: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:   Options.memtable_huge_page_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.bloom_locality: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                    Options.max_successive_merges: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.optimize_filters_for_hits: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.paranoid_file_checks: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.force_consistency_checks: 1
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.report_bg_io_stats: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                               Options.ttl: 2592000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.periodic_compaction_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                       Options.enable_blob_files: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                           Options.min_blob_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                          Options.blob_file_size: 268435456
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                   Options.blob_compression_type: NoCompression
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.enable_blob_garbage_collection: false
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb:                Options.blob_file_starting_level: 0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ec5851b7-6f91-46cf-a703-2ecb3eeaf577
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241633058336, "job": 1, "event": "recovery_started", "wal_files": [31]}
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241633123039, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241633, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ec5851b7-6f91-46cf-a703-2ecb3eeaf577", "db_session_id": "AJK2E8JX5EBGLIW1W53T", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241633128799, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241633, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ec5851b7-6f91-46cf-a703-2ecb3eeaf577", "db_session_id": "AJK2E8JX5EBGLIW1W53T", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241633131289, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241633, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ec5851b7-6f91-46cf-a703-2ecb3eeaf577", "db_session_id": "AJK2E8JX5EBGLIW1W53T", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241633133197, "job": 1, "event": "recovery_finished"}
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559a34754000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: DB pointer 0x559a346fe000
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:13:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Sep 30 14:13:53 compute-0 ceph-osd[82707]: bluestore.MempoolThread fragmentation_score=0.000017 took=0.000010s
Sep 30 14:13:53 compute-0 ceph-osd[82707]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Sep 30 14:13:53 compute-0 ceph-osd[82707]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Sep 30 14:13:53 compute-0 ceph-osd[82707]: _get_class not permitted to load lua
Sep 30 14:13:53 compute-0 ceph-osd[82707]: _get_class not permitted to load sdk
Sep 30 14:13:53 compute-0 ceph-osd[82707]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Sep 30 14:13:53 compute-0 ceph-osd[82707]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Sep 30 14:13:53 compute-0 ceph-osd[82707]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Sep 30 14:13:53 compute-0 ceph-osd[82707]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Sep 30 14:13:53 compute-0 ceph-osd[82707]: osd.0 0 load_pgs
Sep 30 14:13:53 compute-0 ceph-osd[82707]: osd.0 0 load_pgs opened 0 pgs
Sep 30 14:13:53 compute-0 ceph-osd[82707]: osd.0 0 log_to_monitors true
Sep 30 14:13:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0[82703]: 2025-09-30T14:13:53.168+0000 7f3c45ac2740 -1 osd.0 0 log_to_monitors true
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Sep 30 14:13:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3301960660,v1:192.168.122.100:6803/3301960660]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Sep 30 14:13:53 compute-0 ceph-mon[74194]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:53 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:53 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:53 compute-0 ceph-mon[74194]: from='osd.0 [v2:192.168.122.100:6802/3301960660,v1:192.168.122.100:6803/3301960660]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:13:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:53 compute-0 sudo[83390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:13:53 compute-0 sudo[83390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:53 compute-0 sudo[83390]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:13:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3301960660,v1:192.168.122.100:6803/3301960660]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Sep 30 14:13:53 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Sep 30 14:13:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3301960660,v1:192.168.122.100:6803/3301960660]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:13:53 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:13:53 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:53 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 14:13:53 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:13:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:53 compute-0 sudo[83415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:13:53 compute-0 sudo[83415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:53 compute-0 sudo[83415]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Sep 30 14:13:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/385011976,v1:192.168.122.101:6801/385011976]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Sep 30 14:13:53 compute-0 sudo[83440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:13:53 compute-0 sudo[83440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:54 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Sep 30 14:13:54 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Sep 30 14:13:54 compute-0 podman[83538]: 2025-09-30 14:13:54.375322524 +0000 UTC m=+0.048048109 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:13:54 compute-0 podman[83538]: 2025-09-30 14:13:54.495509511 +0000 UTC m=+0.168235076 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:54 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:54 compute-0 ceph-mon[74194]: from='osd.0 [v2:192.168.122.100:6802/3301960660,v1:192.168.122.100:6803/3301960660]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Sep 30 14:13:54 compute-0 ceph-mon[74194]: osdmap e6: 2 total, 0 up, 2 in
Sep 30 14:13:54 compute-0 ceph-mon[74194]: from='osd.0 [v2:192.168.122.100:6802/3301960660,v1:192.168.122.100:6803/3301960660]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Sep 30 14:13:54 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:54 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:54 compute-0 ceph-mon[74194]: from='osd.1 [v2:192.168.122.101:6800/385011976,v1:192.168.122.101:6801/385011976]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Sep 30 14:13:54 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3301960660,v1:192.168.122.100:6803/3301960660]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/385011976,v1:192.168.122.101:6801/385011976]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Sep 30 14:13:54 compute-0 ceph-osd[82707]: osd.0 0 done with init, starting boot process
Sep 30 14:13:54 compute-0 ceph-osd[82707]: osd.0 0 start_boot
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Sep 30 14:13:54 compute-0 ceph-osd[82707]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Sep 30 14:13:54 compute-0 ceph-osd[82707]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Sep 30 14:13:54 compute-0 ceph-osd[82707]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Sep 30 14:13:54 compute-0 ceph-osd[82707]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Sep 30 14:13:54 compute-0 ceph-osd[82707]: osd.0 0  bench count 12288000 bsize 4 KiB
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/385011976,v1:192.168.122.101:6801/385011976]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:54 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 14:13:54 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:13:54 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3301960660; not ready for session (expect reconnect)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:54 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 14:13:54 compute-0 sudo[83440]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:13:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:54 compute-0 sudo[83628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:13:54 compute-0 sudo[83628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:54 compute-0 sudo[83628]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:54 compute-0 sudo[83653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:13:54 compute-0 sudo[83653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:55 compute-0 sudo[83653]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:55 compute-0 sudo[83708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:13:55 compute-0 sudo[83708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:55 compute-0 sudo[83708]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:55 compute-0 ceph-mon[74194]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='osd.0 [v2:192.168.122.100:6802/3301960660,v1:192.168.122.100:6803/3301960660]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='osd.1 [v2:192.168.122.101:6800/385011976,v1:192.168.122.101:6801/385011976]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Sep 30 14:13:55 compute-0 ceph-mon[74194]: osdmap e7: 2 total, 0 up, 2 in
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='osd.1 [v2:192.168.122.101:6800/385011976,v1:192.168.122.101:6801/385011976]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:55 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:55 compute-0 sudo[83733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- inventory --format=json-pretty --filter-for-batch
Sep 30 14:13:55 compute-0 sudo[83733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:13:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Sep 30 14:13:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:13:55 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3301960660; not ready for session (expect reconnect)
Sep 30 14:13:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/385011976,v1:192.168.122.101:6801/385011976]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Sep 30 14:13:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Sep 30 14:13:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:55 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Sep 30 14:13:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:13:55 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:55 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 14:13:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:13:55 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:55 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:13:55 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:13:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:13:55 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:55 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:13:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:13:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:55 compute-0 podman[83798]: 2025-09-30 14:13:55.976730907 +0000 UTC m=+0.038804669 container create 2b75dba59eaf4b0aa478a6e7a9717fbbdbcc280a75feee74dbc9780a1e037dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:13:56 compute-0 systemd[1]: Started libpod-conmon-2b75dba59eaf4b0aa478a6e7a9717fbbdbcc280a75feee74dbc9780a1e037dc6.scope.
Sep 30 14:13:56 compute-0 podman[83798]: 2025-09-30 14:13:55.96040941 +0000 UTC m=+0.022483192 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:56 compute-0 podman[83798]: 2025-09-30 14:13:56.147473725 +0000 UTC m=+0.209547567 container init 2b75dba59eaf4b0aa478a6e7a9717fbbdbcc280a75feee74dbc9780a1e037dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shirley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:13:56 compute-0 podman[83798]: 2025-09-30 14:13:56.153616048 +0000 UTC m=+0.215689800 container start 2b75dba59eaf4b0aa478a6e7a9717fbbdbcc280a75feee74dbc9780a1e037dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:56 compute-0 peaceful_shirley[83813]: 167 167
Sep 30 14:13:56 compute-0 systemd[1]: libpod-2b75dba59eaf4b0aa478a6e7a9717fbbdbcc280a75feee74dbc9780a1e037dc6.scope: Deactivated successfully.
Sep 30 14:13:56 compute-0 podman[83798]: 2025-09-30 14:13:56.180736544 +0000 UTC m=+0.242810326 container attach 2b75dba59eaf4b0aa478a6e7a9717fbbdbcc280a75feee74dbc9780a1e037dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:13:56 compute-0 podman[83798]: 2025-09-30 14:13:56.181989565 +0000 UTC m=+0.244063327 container died 2b75dba59eaf4b0aa478a6e7a9717fbbdbcc280a75feee74dbc9780a1e037dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 14:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-711e9e6660d922627f75bb85a197ff0afb790971bab75e095e9817bfea2ab186-merged.mount: Deactivated successfully.
Sep 30 14:13:56 compute-0 podman[83798]: 2025-09-30 14:13:56.320462178 +0000 UTC m=+0.382536090 container remove 2b75dba59eaf4b0aa478a6e7a9717fbbdbcc280a75feee74dbc9780a1e037dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_shirley, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:13:56 compute-0 systemd[1]: libpod-conmon-2b75dba59eaf4b0aa478a6e7a9717fbbdbcc280a75feee74dbc9780a1e037dc6.scope: Deactivated successfully.
Sep 30 14:13:56 compute-0 podman[83840]: 2025-09-30 14:13:56.475693769 +0000 UTC m=+0.052327506 container create f8449957c191109803886f0a64645073426bd8d40836233d16e66818a0cda462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:13:56 compute-0 systemd[1]: Started libpod-conmon-f8449957c191109803886f0a64645073426bd8d40836233d16e66818a0cda462.scope.
Sep 30 14:13:56 compute-0 podman[83840]: 2025-09-30 14:13:56.449332052 +0000 UTC m=+0.025965819 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:13:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55dffe0fca0793ddca4f7e09a8e223bb08560f02cb93f022fc7634206c3cefc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55dffe0fca0793ddca4f7e09a8e223bb08560f02cb93f022fc7634206c3cefc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55dffe0fca0793ddca4f7e09a8e223bb08560f02cb93f022fc7634206c3cefc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55dffe0fca0793ddca4f7e09a8e223bb08560f02cb93f022fc7634206c3cefc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:13:56 compute-0 podman[83840]: 2025-09-30 14:13:56.62010354 +0000 UTC m=+0.196737377 container init f8449957c191109803886f0a64645073426bd8d40836233d16e66818a0cda462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:13:56 compute-0 podman[83840]: 2025-09-30 14:13:56.628012447 +0000 UTC m=+0.204646234 container start f8449957c191109803886f0a64645073426bd8d40836233d16e66818a0cda462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 14:13:56 compute-0 podman[83840]: 2025-09-30 14:13:56.668272851 +0000 UTC m=+0.244906618 container attach f8449957c191109803886f0a64645073426bd8d40836233d16e66818a0cda462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:13:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:13:56 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3301960660; not ready for session (expect reconnect)
Sep 30 14:13:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:13:56 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:56 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 14:13:56 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:13:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:13:56 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:56 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:13:56 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:56 compute-0 ceph-mon[74194]: purged_snaps scrub starts
Sep 30 14:13:56 compute-0 ceph-mon[74194]: purged_snaps scrub ok
Sep 30 14:13:56 compute-0 ceph-mon[74194]: from='osd.1 [v2:192.168.122.101:6800/385011976,v1:192.168.122.101:6801/385011976]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Sep 30 14:13:56 compute-0 ceph-mon[74194]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:56 compute-0 ceph-mon[74194]: osdmap e8: 2 total, 0 up, 2 in
Sep 30 14:13:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]: [
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:     {
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         "available": false,
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         "being_replaced": false,
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         "ceph_device_lvm": false,
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         "device_id": "QEMU_DVD-ROM_QM00001",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         "lsm_data": {},
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         "lvs": [],
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         "path": "/dev/sr0",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         "rejected_reasons": [
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "Has a FileSystem",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "Insufficient space (<5GB)"
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         ],
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         "sys_api": {
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "actuators": null,
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "device_nodes": [
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:                 "sr0"
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             ],
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "devname": "sr0",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "human_readable_size": "482.00 KB",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "id_bus": "ata",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "model": "QEMU DVD-ROM",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "nr_requests": "2",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "parent": "/dev/sr0",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "partitions": {},
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "path": "/dev/sr0",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "removable": "1",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "rev": "2.5+",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "ro": "0",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "rotational": "0",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "sas_address": "",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "sas_device_handle": "",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "scheduler_mode": "mq-deadline",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "sectors": 0,
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "sectorsize": "2048",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "size": 493568.0,
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "support_discard": "2048",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "type": "disk",
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:             "vendor": "QEMU"
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:         }
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]:     }
Sep 30 14:13:57 compute-0 xenodochial_lamarr[83857]: ]
Sep 30 14:13:57 compute-0 systemd[1]: libpod-f8449957c191109803886f0a64645073426bd8d40836233d16e66818a0cda462.scope: Deactivated successfully.
Sep 30 14:13:57 compute-0 podman[83840]: 2025-09-30 14:13:57.362112063 +0000 UTC m=+0.938745840 container died f8449957c191109803886f0a64645073426bd8d40836233d16e66818a0cda462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f55dffe0fca0793ddca4f7e09a8e223bb08560f02cb93f022fc7634206c3cefc-merged.mount: Deactivated successfully.
Sep 30 14:13:57 compute-0 podman[83840]: 2025-09-30 14:13:57.695555387 +0000 UTC m=+1.272189134 container remove f8449957c191109803886f0a64645073426bd8d40836233d16e66818a0cda462 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_lamarr, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:13:57 compute-0 systemd[1]: libpod-conmon-f8449957c191109803886f0a64645073426bd8d40836233d16e66818a0cda462.scope: Deactivated successfully.
Sep 30 14:13:57 compute-0 sudo[83733]: pam_unix(sudo:session): session closed for user root
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3301960660; not ready for session (expect reconnect)
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Sep 30 14:13:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 14:13:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:13:57 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:13:58 compute-0 ceph-mon[74194]: purged_snaps scrub starts
Sep 30 14:13:58 compute-0 ceph-mon[74194]: purged_snaps scrub ok
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:13:58 compute-0 ceph-mon[74194]: Adjusting osd_memory_target on compute-1 to  5247M
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:13:58 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:13:58 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3301960660; not ready for session (expect reconnect)
Sep 30 14:13:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:13:58 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:58 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 14:13:58 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:13:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:13:58 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:58 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:13:59 compute-0 ceph-mon[74194]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:59 compute-0 ceph-mon[74194]: Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 14:13:59 compute-0 ceph-mon[74194]: Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:13:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:13:59 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3301960660; not ready for session (expect reconnect)
Sep 30 14:13:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:13:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:13:59 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 14:13:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:13:59 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:13:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:13:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:13:59 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:00 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:14:00 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:00 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3301960660; not ready for session (expect reconnect)
Sep 30 14:14:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:14:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:14:00 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Sep 30 14:14:00 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:00 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:01 compute-0 ceph-osd[82707]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 6.245 iops: 1598.826 elapsed_sec: 1.876
Sep 30 14:14:01 compute-0 ceph-osd[82707]: log_channel(cluster) log [WRN] : OSD bench result of 1598.825619 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Sep 30 14:14:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0[82703]: 2025-09-30T14:14:01.590+0000 7f3c41a45640 -1 osd.0 0 waiting for initial osdmap
Sep 30 14:14:01 compute-0 ceph-osd[82707]: osd.0 0 waiting for initial osdmap
Sep 30 14:14:01 compute-0 ceph-osd[82707]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Sep 30 14:14:01 compute-0 ceph-osd[82707]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Sep 30 14:14:01 compute-0 ceph-osd[82707]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Sep 30 14:14:01 compute-0 ceph-osd[82707]: osd.0 8 check_osdmap_features require_osd_release unknown -> squid
Sep 30 14:14:01 compute-0 ceph-osd[82707]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Sep 30 14:14:01 compute-0 ceph-osd[82707]: osd.0 8 set_numa_affinity not setting numa affinity
Sep 30 14:14:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-osd-0[82703]: 2025-09-30T14:14:01.647+0000 7f3c3d06d640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Sep 30 14:14:01 compute-0 ceph-osd[82707]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Sep 30 14:14:01 compute-0 ceph-mon[74194]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Sep 30 14:14:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:14:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Sep 30 14:14:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:14:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Sep 30 14:14:01 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3301960660,v1:192.168.122.100:6803/3301960660] boot
Sep 30 14:14:01 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Sep 30 14:14:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:14:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:14:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:01 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:01 compute-0 ceph-osd[82707]: osd.0 9 state: booting -> active
Sep 30 14:14:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:01 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:01 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:02 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Sep 30 14:14:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:14:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:02 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:02 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:14:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:14:03 compute-0 ceph-mgr[74485]: [devicehealth INFO root] creating mgr pool
Sep 30 14:14:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Sep 30 14:14:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Sep 30 14:14:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:14:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:14:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:14:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:14:03 compute-0 ceph-mon[74194]: OSD bench result of 1598.825619 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Sep 30 14:14:03 compute-0 ceph-mon[74194]: osd.0 [v2:192.168.122.100:6802/3301960660,v1:192.168.122.100:6803/3301960660] boot
Sep 30 14:14:03 compute-0 ceph-mon[74194]: osdmap e9: 2 total, 1 up, 2 in
Sep 30 14:14:03 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:14:03 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:03 compute-0 ceph-mon[74194]: pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:03 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:03 compute-0 sshd-session[84827]: Invalid user linda from 209.38.228.14 port 46810
Sep 30 14:14:03 compute-0 sshd-session[84827]: Received disconnect from 209.38.228.14 port 46810:11: Bye Bye [preauth]
Sep 30 14:14:03 compute-0 sshd-session[84827]: Disconnected from invalid user linda 209.38.228.14 port 46810 [preauth]
Sep 30 14:14:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:03 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:04 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:04 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Sep 30 14:14:04 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Sep 30 14:14:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:04 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:14:04 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:04 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:04 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:04 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:04 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:04 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Sep 30 14:14:04 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Sep 30 14:14:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Sep 30 14:14:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Sep 30 14:14:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Sep 30 14:14:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Sep 30 14:14:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Sep 30 14:14:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Sep 30 14:14:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Sep 30 14:14:05 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Sep 30 14:14:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:05 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:05 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Sep 30 14:14:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Sep 30 14:14:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:05 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:05 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:05 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:05 compute-0 ceph-osd[82707]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Sep 30 14:14:05 compute-0 ceph-osd[82707]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Sep 30 14:14:05 compute-0 ceph-osd[82707]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Sep 30 14:14:06 compute-0 ceph-mon[74194]: pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:06 compute-0 ceph-mon[74194]: osdmap e10: 2 total, 1 up, 2 in
Sep 30 14:14:06 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:06 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:06 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Sep 30 14:14:06 compute-0 ceph-mon[74194]: osdmap e11: 2 total, 1 up, 2 in
Sep 30 14:14:06 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:06 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Sep 30 14:14:06 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Sep 30 14:14:06 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:14:06 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:06 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Sep 30 14:14:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e12 e12: 2 total, 1 up, 2 in
Sep 30 14:14:06 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 1 up, 2 in
Sep 30 14:14:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:07 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:07 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:07 compute-0 ceph-mon[74194]: pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:07 compute-0 ceph-mon[74194]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:14:07 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:07 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Sep 30 14:14:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:07 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:07 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:07 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:08 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 14:14:08 compute-0 ceph-mon[74194]: osdmap e12: 2 total, 1 up, 2 in
Sep 30 14:14:08 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:08 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:08 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:08 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:08 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:09 compute-0 ceph-mon[74194]: pgmap v52: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:09 compute-0 ceph-mon[74194]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 14:14:09 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:14:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:09 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:09 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:09 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:10 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:10 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/385011976; not ready for session (expect reconnect)
Sep 30 14:14:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:10 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Sep 30 14:14:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Sep 30 14:14:11 compute-0 ceph-mon[74194]: pgmap v53: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:11 compute-0 ceph-mon[74194]: OSD bench result of 5559.976478 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Sep 30 14:14:11 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Sep 30 14:14:11 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/385011976,v1:192.168.122.101:6801/385011976] boot
Sep 30 14:14:11 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Sep 30 14:14:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:14:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Sep 30 14:14:12 compute-0 ceph-mon[74194]: osd.1 [v2:192.168.122.101:6800/385011976,v1:192.168.122.101:6801/385011976] boot
Sep 30 14:14:12 compute-0 ceph-mon[74194]: osdmap e13: 2 total, 2 up, 2 in
Sep 30 14:14:12 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:14:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Sep 30 14:14:12 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Sep 30 14:14:12 compute-0 ceph-mgr[74485]: [devicehealth INFO root] creating main.db for devicehealth
Sep 30 14:14:12 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Check health
Sep 30 14:14:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 14:14:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:14:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Sep 30 14:14:12 compute-0 sudo[84843]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Sep 30 14:14:12 compute-0 sudo[84843]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Sep 30 14:14:12 compute-0 sudo[84843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Sep 30 14:14:12 compute-0 sudo[84843]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Sep 30 14:14:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Sep 30 14:14:13 compute-0 ceph-mon[74194]: pgmap v55: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Sep 30 14:14:13 compute-0 ceph-mon[74194]: osdmap e14: 2 total, 2 up, 2 in
Sep 30 14:14:13 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:14:13 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Sep 30 14:14:13 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Sep 30 14:14:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Sep 30 14:14:13 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Sep 30 14:14:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:14 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Sep 30 14:14:14 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.buxlkm(active, since 102s)
Sep 30 14:14:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:14:14 compute-0 ceph-mon[74194]: osdmap e15: 2 total, 2 up, 2 in
Sep 30 14:14:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:15 compute-0 ceph-mon[74194]: pgmap v58: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:15 compute-0 ceph-mon[74194]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Sep 30 14:14:15 compute-0 ceph-mon[74194]: mgrmap e9: compute-0.buxlkm(active, since 102s)
Sep 30 14:14:16 compute-0 ceph-mon[74194]: pgmap v59: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:14:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:14:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:14:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:14:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Sep 30 14:14:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:14:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:14:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:14:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:14:17 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:14:17 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:14:17 compute-0 sshd-session[84846]: Invalid user precio01 from 210.90.155.80 port 59848
Sep 30 14:14:17 compute-0 sshd-session[84846]: Received disconnect from 210.90.155.80 port 59848:11: Bye Bye [preauth]
Sep 30 14:14:17 compute-0 sshd-session[84846]: Disconnected from invalid user precio01 210.90.155.80 port 59848 [preauth]
Sep 30 14:14:17 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:14:17 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:14:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:17 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:14:17 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:14:18 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:18 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:18 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:18 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:18 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:14:18 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:18 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:14:18 compute-0 ceph-mon[74194]: Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:14:18 compute-0 ceph-mon[74194]: Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:14:18 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:14:18 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:14:18 compute-0 sudo[84871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qewcqdrvnubaacraqbmraayvscugyncu ; /usr/bin/python3'
Sep 30 14:14:18 compute-0 sudo[84871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:18 compute-0 python3[84873]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:14:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:14:19 compute-0 podman[84875]: 2025-09-30 14:14:19.003515571 +0000 UTC m=+0.049993957 container create 31a4ec81e3cb0d2748feb062daacfe08be241efdfaa26b77bb1b94bfc3d1468d (image=quay.io/ceph/ceph:v19, name=great_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:14:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:19 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev d69293bb-e9ba-4d53-9b86-9f8093b4e94f (Updating mon deployment (+2 -> 3))
Sep 30 14:14:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Sep 30 14:14:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:14:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Sep 30 14:14:19 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:14:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:14:19 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:19 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Sep 30 14:14:19 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Sep 30 14:14:19 compute-0 ceph-mon[74194]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:19 compute-0 ceph-mon[74194]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:14:19 compute-0 ceph-mon[74194]: Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:14:19 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:19 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:19 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:19 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:14:19 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:14:19 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:19 compute-0 systemd[1]: Started libpod-conmon-31a4ec81e3cb0d2748feb062daacfe08be241efdfaa26b77bb1b94bfc3d1468d.scope.
Sep 30 14:14:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5747066dfceb90db8cf1944764636d22735cc5adc6b5f3ccbab24d7923449b2d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5747066dfceb90db8cf1944764636d22735cc5adc6b5f3ccbab24d7923449b2d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5747066dfceb90db8cf1944764636d22735cc5adc6b5f3ccbab24d7923449b2d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:19 compute-0 podman[84875]: 2025-09-30 14:14:18.984567659 +0000 UTC m=+0.031046045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:19 compute-0 podman[84875]: 2025-09-30 14:14:19.091054694 +0000 UTC m=+0.137533080 container init 31a4ec81e3cb0d2748feb062daacfe08be241efdfaa26b77bb1b94bfc3d1468d (image=quay.io/ceph/ceph:v19, name=great_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:19 compute-0 podman[84875]: 2025-09-30 14:14:19.097942396 +0000 UTC m=+0.144420762 container start 31a4ec81e3cb0d2748feb062daacfe08be241efdfaa26b77bb1b94bfc3d1468d (image=quay.io/ceph/ceph:v19, name=great_neumann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:14:19 compute-0 podman[84875]: 2025-09-30 14:14:19.101637978 +0000 UTC m=+0.148116344 container attach 31a4ec81e3cb0d2748feb062daacfe08be241efdfaa26b77bb1b94bfc3d1468d (image=quay.io/ceph/ceph:v19, name=great_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Sep 30 14:14:19 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502882528' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 14:14:19 compute-0 great_neumann[84891]: 
Sep 30 14:14:19 compute-0 great_neumann[84891]: {"fsid":"5e3c7776-ac03-5698-b79f-a6dc2d80cae6","health":{"status":"HEALTH_WARN","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false},"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":129,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":15,"num_osds":2,"num_up_osds":2,"osd_up_since":1759241651,"num_in_osds":2,"osd_in_since":1759241620,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475262976,"bytes_avail":42466021376,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-09-30T14:12:06.949277+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-09-30T14:13:34.377104+0000","services":{}},"progress_events":{}}
Sep 30 14:14:19 compute-0 systemd[1]: libpod-31a4ec81e3cb0d2748feb062daacfe08be241efdfaa26b77bb1b94bfc3d1468d.scope: Deactivated successfully.
Sep 30 14:14:19 compute-0 podman[84875]: 2025-09-30 14:14:19.568398397 +0000 UTC m=+0.614876783 container died 31a4ec81e3cb0d2748feb062daacfe08be241efdfaa26b77bb1b94bfc3d1468d (image=quay.io/ceph/ceph:v19, name=great_neumann, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5747066dfceb90db8cf1944764636d22735cc5adc6b5f3ccbab24d7923449b2d-merged.mount: Deactivated successfully.
Sep 30 14:14:19 compute-0 podman[84875]: 2025-09-30 14:14:19.715385023 +0000 UTC m=+0.761863389 container remove 31a4ec81e3cb0d2748feb062daacfe08be241efdfaa26b77bb1b94bfc3d1468d (image=quay.io/ceph/ceph:v19, name=great_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 14:14:19 compute-0 systemd[1]: libpod-conmon-31a4ec81e3cb0d2748feb062daacfe08be241efdfaa26b77bb1b94bfc3d1468d.scope: Deactivated successfully.
Sep 30 14:14:19 compute-0 sudo[84871]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:14:20 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Sep 30 14:14:20 compute-0 sudo[84954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqdzuipaggdnbkgurajeaspkxjiqrgpz ; /usr/bin/python3'
Sep 30 14:14:20 compute-0 sudo[84954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:20 compute-0 ceph-mon[74194]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:20 compute-0 ceph-mon[74194]: Deploying daemon mon.compute-2 on compute-2
Sep 30 14:14:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2502882528' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 14:14:20 compute-0 ceph-mon[74194]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Sep 30 14:14:20 compute-0 python3[84956]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
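The Ansible task above is the pattern this run uses for every pool: the ceph CLI is never installed on the host, it is run from a short-lived quay.io/ceph/ceph:v19 container with the host's /etc/ceph mounted in, and the trailing arguments are handed straight to 'ceph osd pool create'. The same invocation is repeated below for the volumes, backups and images pools. A condensed sketch of the command, taken from the log line above (the /home/ceph-admin/assimilate_ceph.conf mount is dropped here for brevity; it is not needed for pool creation):

    # create the 'vms' pool through a throwaway ceph container (mirrors the podman run logged above)
    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create vms replicated_rule --autoscale-mode on

The monitor receives this as the 'osd pool create' command visible a few lines further down and answers with pool 'vms' created.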
Sep 30 14:14:20 compute-0 podman[84957]: 2025-09-30 14:14:20.249090791 +0000 UTC m=+0.041767632 container create da18e71a0b6ba3afaeab7b573e47c1f765edd5e69f08664523720e9886e5a011 (image=quay.io/ceph/ceph:v19, name=keen_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:14:20 compute-0 systemd[1]: Started libpod-conmon-da18e71a0b6ba3afaeab7b573e47c1f765edd5e69f08664523720e9886e5a011.scope.
Sep 30 14:14:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf0819e8445e1dc859f94f842d8c632ba2695c1535451dcd9e18a842b2cf924/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf0819e8445e1dc859f94f842d8c632ba2695c1535451dcd9e18a842b2cf924/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:20 compute-0 podman[84957]: 2025-09-30 14:14:20.232166179 +0000 UTC m=+0.024843040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:20 compute-0 podman[84957]: 2025-09-30 14:14:20.390656572 +0000 UTC m=+0.183333433 container init da18e71a0b6ba3afaeab7b573e47c1f765edd5e69f08664523720e9886e5a011 (image=quay.io/ceph/ceph:v19, name=keen_clarke, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:14:20 compute-0 podman[84957]: 2025-09-30 14:14:20.397051691 +0000 UTC m=+0.189728532 container start da18e71a0b6ba3afaeab7b573e47c1f765edd5e69f08664523720e9886e5a011 (image=quay.io/ceph/ceph:v19, name=keen_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 14:14:20 compute-0 podman[84957]: 2025-09-30 14:14:20.466767039 +0000 UTC m=+0.259443890 container attach da18e71a0b6ba3afaeab7b573e47c1f765edd5e69f08664523720e9886e5a011 (image=quay.io/ceph/ceph:v19, name=keen_clarke, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:14:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 14:14:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1095304407' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Sep 30 14:14:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1095304407' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1095304407' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 14:14:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Sep 30 14:14:21 compute-0 keen_clarke[84973]: pool 'vms' created
Sep 30 14:14:21 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Sep 30 14:14:21 compute-0 systemd[1]: libpod-da18e71a0b6ba3afaeab7b573e47c1f765edd5e69f08664523720e9886e5a011.scope: Deactivated successfully.
Sep 30 14:14:21 compute-0 podman[84957]: 2025-09-30 14:14:21.102369729 +0000 UTC m=+0.895046570 container died da18e71a0b6ba3afaeab7b573e47c1f765edd5e69f08664523720e9886e5a011 (image=quay.io/ceph/ceph:v19, name=keen_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdf0819e8445e1dc859f94f842d8c632ba2695c1535451dcd9e18a842b2cf924-merged.mount: Deactivated successfully.
Sep 30 14:14:21 compute-0 podman[84957]: 2025-09-30 14:14:21.146857308 +0000 UTC m=+0.939534149 container remove da18e71a0b6ba3afaeab7b573e47c1f765edd5e69f08664523720e9886e5a011 (image=quay.io/ceph/ceph:v19, name=keen_clarke, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:14:21 compute-0 systemd[1]: libpod-conmon-da18e71a0b6ba3afaeab7b573e47c1f765edd5e69f08664523720e9886e5a011.scope: Deactivated successfully.
Sep 30 14:14:21 compute-0 sudo[84954]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:21 compute-0 sudo[85035]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnwectgoytaseneldalraitvxdensjgt ; /usr/bin/python3'
Sep 30 14:14:21 compute-0 sudo[85035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:21 compute-0 python3[85037]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:21 compute-0 podman[85038]: 2025-09-30 14:14:21.531690104 +0000 UTC m=+0.048701105 container create 16c811e28294125f1fb7972ddfe83e284d2dd2f8257c9f4e315dbcf8a5491cca (image=quay.io/ceph/ceph:v19, name=stupefied_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:14:21 compute-0 systemd[1]: Started libpod-conmon-16c811e28294125f1fb7972ddfe83e284d2dd2f8257c9f4e315dbcf8a5491cca.scope.
Sep 30 14:14:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:21 compute-0 podman[85038]: 2025-09-30 14:14:21.505773618 +0000 UTC m=+0.022784639 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598c30f3e7a8b528b8646984fcf0298773946737e7277b65a5694ed401636825/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598c30f3e7a8b528b8646984fcf0298773946737e7277b65a5694ed401636825/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:21 compute-0 podman[85038]: 2025-09-30 14:14:21.616809297 +0000 UTC m=+0.133820318 container init 16c811e28294125f1fb7972ddfe83e284d2dd2f8257c9f4e315dbcf8a5491cca (image=quay.io/ceph/ceph:v19, name=stupefied_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:14:21 compute-0 podman[85038]: 2025-09-30 14:14:21.623521714 +0000 UTC m=+0.140532705 container start 16c811e28294125f1fb7972ddfe83e284d2dd2f8257c9f4e315dbcf8a5491cca (image=quay.io/ceph/ceph:v19, name=stupefied_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 14:14:21 compute-0 podman[85038]: 2025-09-30 14:14:21.627695719 +0000 UTC m=+0.144706710 container attach 16c811e28294125f1fb7972ddfe83e284d2dd2f8257c9f4e315dbcf8a5491cca (image=quay.io/ceph/ceph:v19, name=stupefied_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 14:14:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3826202818' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Sep 30 14:14:22 compute-0 ceph-mon[74194]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1095304407' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 14:14:22 compute-0 ceph-mon[74194]: osdmap e16: 2 total, 2 up, 2 in
Sep 30 14:14:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3826202818' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3826202818' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Sep 30 14:14:22 compute-0 stupefied_maxwell[85053]: pool 'volumes' created
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Sep 30 14:14:22 compute-0 systemd[1]: libpod-16c811e28294125f1fb7972ddfe83e284d2dd2f8257c9f4e315dbcf8a5491cca.scope: Deactivated successfully.
Sep 30 14:14:22 compute-0 podman[85038]: 2025-09-30 14:14:22.113202295 +0000 UTC m=+0.630213286 container died 16c811e28294125f1fb7972ddfe83e284d2dd2f8257c9f4e315dbcf8a5491cca (image=quay.io/ceph/ceph:v19, name=stupefied_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-598c30f3e7a8b528b8646984fcf0298773946737e7277b65a5694ed401636825-merged.mount: Deactivated successfully.
Sep 30 14:14:22 compute-0 podman[85038]: 2025-09-30 14:14:22.158724991 +0000 UTC m=+0.675735982 container remove 16c811e28294125f1fb7972ddfe83e284d2dd2f8257c9f4e315dbcf8a5491cca (image=quay.io/ceph/ceph:v19, name=stupefied_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:14:22 compute-0 systemd[1]: libpod-conmon-16c811e28294125f1fb7972ddfe83e284d2dd2f8257c9f4e315dbcf8a5491cca.scope: Deactivated successfully.
Sep 30 14:14:22 compute-0 sudo[85035]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:22 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:22 compute-0 sudo[85114]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtqvczfxsgaeowjclbadagssvzxdnyny ; /usr/bin/python3'
Sep 30 14:14:22 compute-0 sudo[85114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:22 compute-0 python3[85116]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:22 compute-0 podman[85117]: 2025-09-30 14:14:22.528928761 +0000 UTC m=+0.043299931 container create 20c21274e5094f2a8b7e96bcbf9a6dff5942728f64a4481dd109c4df35e3fa1d (image=quay.io/ceph/ceph:v19, name=adoring_mirzakhani, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:14:22 compute-0 systemd[1]: Started libpod-conmon-20c21274e5094f2a8b7e96bcbf9a6dff5942728f64a4481dd109c4df35e3fa1d.scope.
Sep 30 14:14:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a60fb81009aeba81bd38e47cdecb02d5b95110d3dcf61c560e31223935663dc0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a60fb81009aeba81bd38e47cdecb02d5b95110d3dcf61c560e31223935663dc0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:22 compute-0 podman[85117]: 2025-09-30 14:14:22.600813204 +0000 UTC m=+0.115184404 container init 20c21274e5094f2a8b7e96bcbf9a6dff5942728f64a4481dd109c4df35e3fa1d (image=quay.io/ceph/ceph:v19, name=adoring_mirzakhani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:14:22 compute-0 podman[85117]: 2025-09-30 14:14:22.511450515 +0000 UTC m=+0.025821705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:22 compute-0 podman[85117]: 2025-09-30 14:14:22.606250559 +0000 UTC m=+0.120621729 container start 20c21274e5094f2a8b7e96bcbf9a6dff5942728f64a4481dd109c4df35e3fa1d (image=quay.io/ceph/ceph:v19, name=adoring_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:14:22 compute-0 podman[85117]: 2025-09-30 14:14:22.610380672 +0000 UTC m=+0.124751852 container attach 20c21274e5094f2a8b7e96bcbf9a6dff5942728f64a4481dd109c4df35e3fa1d (image=quay.io/ceph/ceph:v19, name=adoring_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:22 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Sep 30 14:14:22 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Sep 30 14:14:22 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2141061146; not ready for session (expect reconnect)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:22 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:22 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Sep 30 14:14:22 compute-0 ceph-mon[74194]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:14:22 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 14:14:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/407658845' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v65: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:23 compute-0 ceph-mgr[74485]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Sep 30 14:14:23 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2141061146; not ready for session (expect reconnect)
Sep 30 14:14:23 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:14:23 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:23 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Sep 30 14:14:24 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Sep 30 14:14:24 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:14:24 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Sep 30 14:14:24 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Sep 30 14:14:24 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:24 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:24 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Sep 30 14:14:24 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2141061146; not ready for session (expect reconnect)
Sep 30 14:14:24 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:14:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:24 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Sep 30 14:14:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v66: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:25 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:25 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:25 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Sep 30 14:14:25 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2141061146; not ready for session (expect reconnect)
Sep 30 14:14:25 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:14:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:25 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Sep 30 14:14:26 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Sep 30 14:14:26 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:26 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:26 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Sep 30 14:14:26 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2141061146; not ready for session (expect reconnect)
Sep 30 14:14:26 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:14:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:26 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Sep 30 14:14:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v67: 3 pgs: 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:27 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:27 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:27 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:27 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Sep 30 14:14:27 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2141061146; not ready for session (expect reconnect)
Sep 30 14:14:27 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:14:27 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:27 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Sep 30 14:14:27 compute-0 ceph-mon[74194]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Sep 30 14:14:27 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:14:27 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : monmap epoch 2
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : last_changed 2025-09-30T14:14:22.756097+0000
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : created 2025-09-30T14:12:03.527961+0000
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : election_strategy: 1
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
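At this point cephadm has grown the monmap from one monitor to two (compute-2 joins at epoch 2; compute-1 is added a few seconds later, producing epoch 3), and the repeated elections and 'not ready for session (expect reconnect)' messages around it are the expected churn while the new monitors sync. One way to confirm quorum from the host would be the same containerised CLI used for the pool-create tasks; the quorum_status call below does not appear in the log and is only a possible check, assuming the admin keyring under /etc/ceph:

    # ask the cluster who is in quorum after the monmap change (illustrative; not part of the logged run)
    podman run --rm --net=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        quorum_status --format json-pretty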
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap 
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.buxlkm(active, since 115s)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
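POOL_APP_NOT_ENABLED is raised because the freshly created pools carry no application tag yet; the playbook has not reached that step at this point in the log. The usual remediation, sketched below on the assumption that these pools are meant for RBD (as is typical for OpenStack vms/volumes/backups/images pools), is to enable the rbd application on each of them; this command is not part of the logged run:

    # tag the pools created in this run with the rbd application so POOL_APP_NOT_ENABLED clears
    # (assumption: the pools back RBD images; 'images' only exists once the later create task has run)
    for pool in vms volumes backups images; do
        podman run --rm --net=host \
            --volume /etc/ceph:/etc/ceph:z \
            --entrypoint ceph quay.io/ceph/ceph:v19 \
            -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
            osd pool application enable "$pool" rbd
    done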
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/407658845' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Sep 30 14:14:28 compute-0 adoring_mirzakhani[85132]: pool 'backups' created
Sep 30 14:14:28 compute-0 systemd[1]: libpod-20c21274e5094f2a8b7e96bcbf9a6dff5942728f64a4481dd109c4df35e3fa1d.scope: Deactivated successfully.
Sep 30 14:14:28 compute-0 podman[85117]: 2025-09-30 14:14:28.225264326 +0000 UTC m=+5.739635486 container died 20c21274e5094f2a8b7e96bcbf9a6dff5942728f64a4481dd109c4df35e3fa1d (image=quay.io/ceph/ceph:v19, name=adoring_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: Deploying daemon mon.compute-1 on compute-1
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0 calling monitor election
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/407658845' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: pgmap v65: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-2 calling monitor election
Sep 30 14:14:28 compute-0 ceph-mon[74194]: pgmap v66: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: pgmap v67: 3 pgs: 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: monmap epoch 2
Sep 30 14:14:28 compute-0 ceph-mon[74194]: fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:14:28 compute-0 ceph-mon[74194]: last_changed 2025-09-30T14:14:22.756097+0000
Sep 30 14:14:28 compute-0 ceph-mon[74194]: created 2025-09-30T14:12:03.527961+0000
Sep 30 14:14:28 compute-0 ceph-mon[74194]: min_mon_release 19 (squid)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: election_strategy: 1
Sep 30 14:14:28 compute-0 ceph-mon[74194]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 14:14:28 compute-0 ceph-mon[74194]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Sep 30 14:14:28 compute-0 ceph-mon[74194]: fsmap 
Sep 30 14:14:28 compute-0 ceph-mon[74194]: osdmap e17: 2 total, 2 up, 2 in
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mgrmap e9: compute-0.buxlkm(active, since 115s)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:14:28 compute-0 ceph-mon[74194]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:14:28 compute-0 ceph-mon[74194]:      osd.1 observed slow operation indications in BlueStore
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Sep 30 14:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a60fb81009aeba81bd38e47cdecb02d5b95110d3dcf61c560e31223935663dc0-merged.mount: Deactivated successfully.
Sep 30 14:14:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 14:14:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:28 compute-0 podman[85117]: 2025-09-30 14:14:28.417153631 +0000 UTC m=+5.931524821 container remove 20c21274e5094f2a8b7e96bcbf9a6dff5942728f64a4481dd109c4df35e3fa1d (image=quay.io/ceph/ceph:v19, name=adoring_mirzakhani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:14:28 compute-0 systemd[1]: libpod-conmon-20c21274e5094f2a8b7e96bcbf9a6dff5942728f64a4481dd109c4df35e3fa1d.scope: Deactivated successfully.
Sep 30 14:14:28 compute-0 sudo[85114]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:28 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev d69293bb-e9ba-4d53-9b86-9f8093b4e94f (Updating mon deployment (+2 -> 3))
Sep 30 14:14:28 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event d69293bb-e9ba-4d53-9b86-9f8093b4e94f (Updating mon deployment (+2 -> 3)) in 9 seconds
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Sep 30 14:14:28 compute-0 sudo[85196]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljyesbuonvsaxetpsvgiwyxvqubuajnn ; /usr/bin/python3'
Sep 30 14:14:28 compute-0 sudo[85196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:28 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 75a68850-d6f4-4141-aa69-d4a4076cae13 (Updating mgr deployment (+2 -> 3))
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.udzudc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.udzudc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Sep 30 14:14:28 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.udzudc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Sep 30 14:14:28 compute-0 python3[85198]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.udzudc on compute-2
Sep 30 14:14:28 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.udzudc on compute-2
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Sep 30 14:14:28 compute-0 ceph-mon[74194]: paxos.0).electionLogic(10) init, last seen epoch 10
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:28 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 14:14:28 compute-0 podman[85199]: 2025-09-30 14:14:28.754103743 +0000 UTC m=+0.046621234 container create d46a39c41b498f5aaa53692d82011473162103bd87713a77dc284c65ee075352 (image=quay.io/ceph/ceph:v19, name=tender_curran, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 14:14:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:14:28.757+0000 7f02c6114640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Sep 30 14:14:28 compute-0 ceph-mgr[74485]: mgr.server handle_report got status from non-daemon mon.compute-2
Sep 30 14:14:28 compute-0 systemd[1]: Started libpod-conmon-d46a39c41b498f5aaa53692d82011473162103bd87713a77dc284c65ee075352.scope.
Sep 30 14:14:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0885492b78779cd959bba9f360c642c8d8ba3c9ffb2082e43b45fbaa635e64e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0885492b78779cd959bba9f360c642c8d8ba3c9ffb2082e43b45fbaa635e64e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:28 compute-0 podman[85199]: 2025-09-30 14:14:28.735047668 +0000 UTC m=+0.027565179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:28 compute-0 podman[85199]: 2025-09-30 14:14:28.836735163 +0000 UTC m=+0.129252674 container init d46a39c41b498f5aaa53692d82011473162103bd87713a77dc284c65ee075352 (image=quay.io/ceph/ceph:v19, name=tender_curran, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:28 compute-0 podman[85199]: 2025-09-30 14:14:28.843598995 +0000 UTC m=+0.136116486 container start d46a39c41b498f5aaa53692d82011473162103bd87713a77dc284c65ee075352 (image=quay.io/ceph/ceph:v19, name=tender_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:14:28 compute-0 podman[85199]: 2025-09-30 14:14:28.8470168 +0000 UTC m=+0.139534291 container attach d46a39c41b498f5aaa53692d82011473162103bd87713a77dc284c65ee075352 (image=quay.io/ceph/ceph:v19, name=tender_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:14:28 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v69: 4 pgs: 1 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:29 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:29 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:29 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:29 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:29 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 14:14:30 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:30 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:30 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:30 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:30 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 14:14:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v70: 4 pgs: 1 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:31 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:31 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:31 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 14:14:31 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:32 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:32 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:32 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:14:32 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:32 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:14:32
Sep 30 14:14:32 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:14:32 compute-0 ceph-mgr[74485]: [balancer INFO root] Some PGs (0.250000) are unknown; try again later
Sep 30 14:14:32 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:32 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:32 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:32 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 14:14:32 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v71: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 3 completed events
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Sep 30 14:14:33 compute-0 ceph-mon[74194]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : monmap epoch 3
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : last_changed 2025-09-30T14:14:28.667145+0000
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : created 2025-09-30T14:12:03.527961+0000
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : election_strategy: 1
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap 
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.buxlkm(active, since 2m)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 2 pool(s) do not have an application enabled
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.zeqptq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.zeqptq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.zeqptq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.zeqptq on compute-1
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.zeqptq on compute-1
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Sep 30 14:14:33 compute-0 ceph-mon[74194]: Deploying daemon mgr.compute-2.udzudc on compute-2
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0 calling monitor election
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-2 calling monitor election
Sep 30 14:14:33 compute-0 ceph-mon[74194]: pgmap v69: 4 pgs: 1 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-1 calling monitor election
Sep 30 14:14:33 compute-0 ceph-mon[74194]: pgmap v70: 4 pgs: 1 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: pgmap v71: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: monmap epoch 3
Sep 30 14:14:33 compute-0 ceph-mon[74194]: fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:14:33 compute-0 ceph-mon[74194]: last_changed 2025-09-30T14:14:28.667145+0000
Sep 30 14:14:33 compute-0 ceph-mon[74194]: created 2025-09-30T14:12:03.527961+0000
Sep 30 14:14:33 compute-0 ceph-mon[74194]: min_mon_release 19 (squid)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: election_strategy: 1
Sep 30 14:14:33 compute-0 ceph-mon[74194]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Sep 30 14:14:33 compute-0 ceph-mon[74194]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Sep 30 14:14:33 compute-0 ceph-mon[74194]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Sep 30 14:14:33 compute-0 ceph-mon[74194]: fsmap 
Sep 30 14:14:33 compute-0 ceph-mon[74194]: osdmap e18: 2 total, 2 up, 2 in
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mgrmap e9: compute-0.buxlkm(active, since 2m)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 2 pool(s) do not have an application enabled
Sep 30 14:14:33 compute-0 ceph-mon[74194]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:14:33 compute-0 ceph-mon[74194]:      osd.1 observed slow operation indications in BlueStore
Sep 30 14:14:33 compute-0 ceph-mon[74194]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Sep 30 14:14:33 compute-0 ceph-mon[74194]:     application not enabled on pool 'vms'
Sep 30 14:14:33 compute-0 ceph-mon[74194]:     application not enabled on pool 'volumes'
Sep 30 14:14:33 compute-0 ceph-mon[74194]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:33 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.zeqptq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 14:14:33 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Sep 30 14:14:33 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev dce39831-05d5-47f2-a32d-83503841e7b3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Sep 30 14:14:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:14:34 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/992432743; not ready for session (expect reconnect)
Sep 30 14:14:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:14:34 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:14:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Sep 30 14:14:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:14:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Sep 30 14:14:34 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Sep 30 14:14:34 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev e4a8b610-c045-4473-94e8-c0612083d2c3 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Sep 30 14:14:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Sep 30 14:14:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:14:34 compute-0 ceph-mon[74194]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:14:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.zeqptq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Sep 30 14:14:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:14:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:34 compute-0 ceph-mon[74194]: Deploying daemon mgr.compute-1.zeqptq on compute-1
Sep 30 14:14:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:14:34 compute-0 ceph-mon[74194]: osdmap e19: 2 total, 2 up, 2 in
Sep 30 14:14:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:14:34 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v74: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/369098532' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 75a68850-d6f4-4141-aa69-d4a4076cae13 (Updating mgr deployment (+2 -> 3))
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 75a68850-d6f4-4141-aa69-d4a4076cae13 (Updating mgr deployment (+2 -> 3)) in 7 seconds
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Sep 30 14:14:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:14:35.669+0000 7f02c6114640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: mgr.server handle_report got status from non-daemon mon.compute-1
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev bc5c408c-fbba-4cd3-ab5a-90f077f4a6db (Updating crash deployment (+1 -> 3))
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/369098532' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 14:14:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Sep 30 14:14:35 compute-0 tender_curran[85213]: pool 'images' created
Sep 30 14:14:35 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 71eed346-884e-4721-acae-3eb288ae739a (PG autoscaler increasing pool 4 PGs from 1 to 32)
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev dce39831-05d5-47f2-a32d-83503841e7b3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event dce39831-05d5-47f2-a32d-83503841e7b3 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 2 seconds
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev e4a8b610-c045-4473-94e8-c0612083d2c3 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event e4a8b610-c045-4473-94e8-c0612083d2c3 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 1 seconds
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 71eed346-884e-4721-acae-3eb288ae739a (PG autoscaler increasing pool 4 PGs from 1 to 32)
Sep 30 14:14:35 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 71eed346-884e-4721-acae-3eb288ae739a (PG autoscaler increasing pool 4 PGs from 1 to 32) in 0 seconds
Sep 30 14:14:35 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 21 pg[5.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:35 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 21 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=21 pruub=8.381165504s) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active pruub 51.196254730s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:35 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 21 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=21 pruub=8.381165504s) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown pruub 51.196254730s@ mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:35 compute-0 systemd[1]: libpod-d46a39c41b498f5aaa53692d82011473162103bd87713a77dc284c65ee075352.scope: Deactivated successfully.
Sep 30 14:14:35 compute-0 podman[85199]: 2025-09-30 14:14:35.993129925 +0000 UTC m=+7.285647426 container died d46a39c41b498f5aaa53692d82011473162103bd87713a77dc284c65ee075352 (image=quay.io/ceph/ceph:v19, name=tender_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0885492b78779cd959bba9f360c642c8d8ba3c9ffb2082e43b45fbaa635e64e8-merged.mount: Deactivated successfully.
Sep 30 14:14:36 compute-0 podman[85199]: 2025-09-30 14:14:36.036029785 +0000 UTC m=+7.328547276 container remove d46a39c41b498f5aaa53692d82011473162103bd87713a77dc284c65ee075352 (image=quay.io/ceph/ceph:v19, name=tender_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:14:36 compute-0 systemd[1]: libpod-conmon-d46a39c41b498f5aaa53692d82011473162103bd87713a77dc284c65ee075352.scope: Deactivated successfully.
Sep 30 14:14:36 compute-0 sudo[85196]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:36 compute-0 sudo[85276]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dedbttsuvdpiyklhmspiuauubreebfus ; /usr/bin/python3'
Sep 30 14:14:36 compute-0 sudo[85276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:14:36 compute-0 ceph-mon[74194]: osdmap e20: 2 total, 2 up, 2 in
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:14:36 compute-0 ceph-mon[74194]: pgmap v74: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/369098532' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Sep 30 14:14:36 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:36 compute-0 python3[85278]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:36 compute-0 podman[85279]: 2025-09-30 14:14:36.402510974 +0000 UTC m=+0.041729232 container create cb568c96ee5c5f04aa657df3aeae81ddafa629c3379dc2d5501c955d83d40591 (image=quay.io/ceph/ceph:v19, name=nervous_hodgkin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 14:14:36 compute-0 systemd[1]: Started libpod-conmon-cb568c96ee5c5f04aa657df3aeae81ddafa629c3379dc2d5501c955d83d40591.scope.
Sep 30 14:14:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d184bdc3f925d5f3536aa9aaaf00c24b6146ad0120f592fb91bad2ee8dbb20c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d184bdc3f925d5f3536aa9aaaf00c24b6146ad0120f592fb91bad2ee8dbb20c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:36 compute-0 podman[85279]: 2025-09-30 14:14:36.475688558 +0000 UTC m=+0.114906816 container init cb568c96ee5c5f04aa657df3aeae81ddafa629c3379dc2d5501c955d83d40591 (image=quay.io/ceph/ceph:v19, name=nervous_hodgkin, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:36 compute-0 podman[85279]: 2025-09-30 14:14:36.385462839 +0000 UTC m=+0.024681127 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:36 compute-0 podman[85279]: 2025-09-30 14:14:36.481945864 +0000 UTC m=+0.121164122 container start cb568c96ee5c5f04aa657df3aeae81ddafa629c3379dc2d5501c955d83d40591 (image=quay.io/ceph/ceph:v19, name=nervous_hodgkin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:36 compute-0 podman[85279]: 2025-09-30 14:14:36.485654887 +0000 UTC m=+0.124873165 container attach cb568c96ee5c5f04aa657df3aeae81ddafa629c3379dc2d5501c955d83d40591 (image=quay.io/ceph/ceph:v19, name=nervous_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 14:14:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 14:14:36 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2981242285' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Sep 30 14:14:36 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2981242285' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 14:14:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Sep 30 14:14:36 compute-0 nervous_hodgkin[85294]: pool 'cephfs.cephfs.meta' created
Sep 30 14:14:36 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1e( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1f( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1d( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1c( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.a( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.9( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1b( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.8( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.4( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.3( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.5( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.2( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.6( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.b( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.7( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.c( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.d( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.e( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.10( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.11( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.12( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.13( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.14( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.f( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.15( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.16( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.17( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.18( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.19( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:36 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1a( empty local-lis/les=17/18 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:37 compute-0 systemd[1]: libpod-cb568c96ee5c5f04aa657df3aeae81ddafa629c3379dc2d5501c955d83d40591.scope: Deactivated successfully.
Sep 30 14:14:37 compute-0 conmon[85294]: conmon cb568c96ee5c5f04aa65 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb568c96ee5c5f04aa657df3aeae81ddafa629c3379dc2d5501c955d83d40591.scope/container/memory.events
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1e( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 podman[85279]: 2025-09-30 14:14:37.005233742 +0000 UTC m=+0.644452020 container died cb568c96ee5c5f04aa657df3aeae81ddafa629c3379dc2d5501c955d83d40591 (image=quay.io/ceph/ceph:v19, name=nervous_hodgkin, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.9( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1f( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.4( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.3( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.5( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.2( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.6( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.7( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.e( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.11( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.10( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.13( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.12( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.15( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.14( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.16( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.f( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.18( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.19( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.1a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 22 pg[3.17( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=17/17 les/c/f=18/18/0 sis=21) [0] r=0 lpr=21 pi=[17,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v77: 68 pgs: 1 creating+peering, 1 peering, 63 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 14:14:37 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d184bdc3f925d5f3536aa9aaaf00c24b6146ad0120f592fb91bad2ee8dbb20c-merged.mount: Deactivated successfully.
Sep 30 14:14:37 compute-0 podman[85279]: 2025-09-30 14:14:37.04164044 +0000 UTC m=+0.680858698 container remove cb568c96ee5c5f04aa657df3aeae81ddafa629c3379dc2d5501c955d83d40591 (image=quay.io/ceph/ceph:v19, name=nervous_hodgkin, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:14:37 compute-0 systemd[1]: libpod-conmon-cb568c96ee5c5f04aa657df3aeae81ddafa629c3379dc2d5501c955d83d40591.scope: Deactivated successfully.
Sep 30 14:14:37 compute-0 sudo[85276]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:37 compute-0 sudo[85356]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niqgbvwyigremshislicjujuilgettlk ; /usr/bin/python3'
Sep 30 14:14:37 compute-0 sudo[85356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:37 compute-0 ceph-mon[74194]: Deploying daemon crash.compute-2 on compute-2
Sep 30 14:14:37 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:14:37 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:14:37 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:14:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/369098532' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 14:14:37 compute-0 ceph-mon[74194]: osdmap e21: 2 total, 2 up, 2 in
Sep 30 14:14:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2981242285' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2981242285' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 14:14:37 compute-0 ceph-mon[74194]: osdmap e22: 2 total, 2 up, 2 in
Sep 30 14:14:37 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:37 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.1e deep-scrub starts
Sep 30 14:14:37 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.1e deep-scrub ok
Sep 30 14:14:37 compute-0 python3[85358]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:37 compute-0 podman[85359]: 2025-09-30 14:14:37.391934465 +0000 UTC m=+0.048928141 container create 3630b285d8c4f1d8e961ab8fb0f160869e34de296b2da9928537dd06ead5ba17 (image=quay.io/ceph/ceph:v19, name=nice_jepsen, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:14:37 compute-0 systemd[1]: Started libpod-conmon-3630b285d8c4f1d8e961ab8fb0f160869e34de296b2da9928537dd06ead5ba17.scope.
Sep 30 14:14:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6985a4dddd4f0d8626bc6e59cf0f1ad14989022684f3ef92961c2ac774addab8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6985a4dddd4f0d8626bc6e59cf0f1ad14989022684f3ef92961c2ac774addab8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:37 compute-0 podman[85359]: 2025-09-30 14:14:37.37086438 +0000 UTC m=+0.027858086 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:37 compute-0 podman[85359]: 2025-09-30 14:14:37.479776296 +0000 UTC m=+0.136769992 container init 3630b285d8c4f1d8e961ab8fb0f160869e34de296b2da9928537dd06ead5ba17 (image=quay.io/ceph/ceph:v19, name=nice_jepsen, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:37 compute-0 podman[85359]: 2025-09-30 14:14:37.488319409 +0000 UTC m=+0.145313085 container start 3630b285d8c4f1d8e961ab8fb0f160869e34de296b2da9928537dd06ead5ba17 (image=quay.io/ceph/ceph:v19, name=nice_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:14:37 compute-0 podman[85359]: 2025-09-30 14:14:37.643230572 +0000 UTC m=+0.300224268 container attach 3630b285d8c4f1d8e961ab8fb0f160869e34de296b2da9928537dd06ead5ba17 (image=quay.io/ceph/ceph:v19, name=nice_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:14:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:14:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Sep 30 14:14:37 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1710423773' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Sep 30 14:14:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:38 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Sep 30 14:14:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:14:38 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Sep 30 14:14:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:14:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1710423773' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 14:14:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Sep 30 14:14:38 compute-0 nice_jepsen[85374]: pool 'cephfs.cephfs.data' created
Sep 30 14:14:38 compute-0 systemd[1]: libpod-3630b285d8c4f1d8e961ab8fb0f160869e34de296b2da9928537dd06ead5ba17.scope: Deactivated successfully.
Sep 30 14:14:38 compute-0 podman[85359]: 2025-09-30 14:14:38.463300771 +0000 UTC m=+1.120294447 container died 3630b285d8c4f1d8e961ab8fb0f160869e34de296b2da9928537dd06ead5ba17 (image=quay.io/ceph/ceph:v19, name=nice_jepsen, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:14:38 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 7 completed events
Sep 30 14:14:38 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Sep 30 14:14:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:14:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v79: 100 pgs: 1 creating+peering, 1 peering, 95 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:39 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23 pruub=10.819097519s) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active pruub 56.681304932s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:39 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23 pruub=10.819097519s) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown pruub 56.681304932s@ mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:39 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Sep 30 14:14:39 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:14:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Sep 30 14:14:39 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:39 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Sep 30 14:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6985a4dddd4f0d8626bc6e59cf0f1ad14989022684f3ef92961c2ac774addab8-merged.mount: Deactivated successfully.
Sep 30 14:14:39 compute-0 ceph-mon[74194]: pgmap v77: 68 pgs: 1 creating+peering, 1 peering, 63 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:39 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1710423773' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Sep 30 14:14:39 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:39 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:40 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Sep 30 14:14:40 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Sep 30 14:14:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 14:14:40 compute-0 podman[85359]: 2025-09-30 14:14:40.448860626 +0000 UTC m=+3.105854302 container remove 3630b285d8c4f1d8e961ab8fb0f160869e34de296b2da9928537dd06ead5ba17 (image=quay.io/ceph/ceph:v19, name=nice_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:14:40 compute-0 sudo[85356]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Sep 30 14:14:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:40 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1e( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1d( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1f( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.10( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.11( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.12( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.14( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.16( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.17( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.13( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.8( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.9( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.a( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.b( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.7( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.15( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.6( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.4( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.3( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.5( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.c( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.f( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.2( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.d( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.e( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1c( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1b( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1a( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.19( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.18( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.11( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:40 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev bc5c408c-fbba-4cd3-ab5a-90f077f4a6db (Updating crash deployment (+1 -> 3))
Sep 30 14:14:40 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event bc5c408c-fbba-4cd3-ab5a-90f077f4a6db (Updating crash deployment (+1 -> 3)) in 5 seconds
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.10( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.17( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.16( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.7( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=23/24 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.4( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 24 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:14:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:14:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:14:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:14:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:14:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:14:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:14:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:14:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:40 compute-0 systemd[1]: libpod-conmon-3630b285d8c4f1d8e961ab8fb0f160869e34de296b2da9928537dd06ead5ba17.scope: Deactivated successfully.
Sep 30 14:14:40 compute-0 sudo[85411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:14:40 compute-0 sudo[85411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:14:40 compute-0 sudo[85411]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:40 compute-0 sudo[85465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evomskccbdehvxthwbhzqddejsszuukt ; /usr/bin/python3'
Sep 30 14:14:40 compute-0 sudo[85465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:40 compute-0 sudo[85457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:14:40 compute-0 sudo[85457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:14:40 compute-0 python3[85484]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:40 compute-0 ceph-mon[74194]: 3.1e deep-scrub starts
Sep 30 14:14:40 compute-0 ceph-mon[74194]: 3.1e deep-scrub ok
Sep 30 14:14:40 compute-0 ceph-mon[74194]: 3.1d scrub starts
Sep 30 14:14:40 compute-0 ceph-mon[74194]: 3.1d scrub ok
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1710423773' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Sep 30 14:14:40 compute-0 ceph-mon[74194]: osdmap e23: 2 total, 2 up, 2 in
Sep 30 14:14:40 compute-0 ceph-mon[74194]: pgmap v79: 100 pgs: 1 creating+peering, 1 peering, 95 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:40 compute-0 ceph-mon[74194]: 3.9 scrub starts
Sep 30 14:14:40 compute-0 ceph-mon[74194]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:14:40 compute-0 ceph-mon[74194]: 3.9 scrub ok
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:40 compute-0 ceph-mon[74194]: osdmap e24: 2 total, 2 up, 2 in
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:14:40 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:40 compute-0 podman[85487]: 2025-09-30 14:14:40.865208076 +0000 UTC m=+0.042454830 container create e1de88cedba79263aa33c52f324f73687316dc23bfaf6918d414879942ca34a1 (image=quay.io/ceph/ceph:v19, name=youthful_meninsky, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:40 compute-0 systemd[75514]: Starting Mark boot as successful...
Sep 30 14:14:40 compute-0 systemd[75514]: Finished Mark boot as successful.
Sep 30 14:14:40 compute-0 systemd[1]: Started libpod-conmon-e1de88cedba79263aa33c52f324f73687316dc23bfaf6918d414879942ca34a1.scope.
Sep 30 14:14:40 compute-0 podman[85487]: 2025-09-30 14:14:40.848470535 +0000 UTC m=+0.025717309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ab20560e349b36b720b8e43a0c5e82b4e39f8f3908f26cc6290a3fa63290f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ab20560e349b36b720b8e43a0c5e82b4e39f8f3908f26cc6290a3fa63290f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:40 compute-0 podman[85487]: 2025-09-30 14:14:40.974233478 +0000 UTC m=+0.151480262 container init e1de88cedba79263aa33c52f324f73687316dc23bfaf6918d414879942ca34a1 (image=quay.io/ceph/ceph:v19, name=youthful_meninsky, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:14:40 compute-0 podman[85487]: 2025-09-30 14:14:40.980230416 +0000 UTC m=+0.157477170 container start e1de88cedba79263aa33c52f324f73687316dc23bfaf6918d414879942ca34a1 (image=quay.io/ceph/ceph:v19, name=youthful_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:14:40 compute-0 podman[85487]: 2025-09-30 14:14:40.983865932 +0000 UTC m=+0.161112706 container attach e1de88cedba79263aa33c52f324f73687316dc23bfaf6918d414879942ca34a1 (image=quay.io/ceph/ceph:v19, name=youthful_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:14:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v81: 100 pgs: 1 creating+peering, 1 peering, 95 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:41 compute-0 podman[85547]: 2025-09-30 14:14:41.096422298 +0000 UTC m=+0.039521653 container create d81bcb30e211e9fadb0d6ffd00807e3585ea69139cbf2b96a813f0d7d12da5d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:14:41 compute-0 systemd[1]: Started libpod-conmon-d81bcb30e211e9fadb0d6ffd00807e3585ea69139cbf2b96a813f0d7d12da5d9.scope.
Sep 30 14:14:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:41 compute-0 podman[85547]: 2025-09-30 14:14:41.078005252 +0000 UTC m=+0.021104627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:14:41 compute-0 podman[85547]: 2025-09-30 14:14:41.178813979 +0000 UTC m=+0.121913414 container init d81bcb30e211e9fadb0d6ffd00807e3585ea69139cbf2b96a813f0d7d12da5d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:14:41 compute-0 podman[85547]: 2025-09-30 14:14:41.18645181 +0000 UTC m=+0.129551175 container start d81bcb30e211e9fadb0d6ffd00807e3585ea69139cbf2b96a813f0d7d12da5d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_germain, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:14:41 compute-0 sad_germain[85582]: 167 167
Sep 30 14:14:41 compute-0 podman[85547]: 2025-09-30 14:14:41.190220239 +0000 UTC m=+0.133319614 container attach d81bcb30e211e9fadb0d6ffd00807e3585ea69139cbf2b96a813f0d7d12da5d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_germain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:41 compute-0 systemd[1]: libpod-d81bcb30e211e9fadb0d6ffd00807e3585ea69139cbf2b96a813f0d7d12da5d9.scope: Deactivated successfully.
Sep 30 14:14:41 compute-0 podman[85547]: 2025-09-30 14:14:41.191979055 +0000 UTC m=+0.135078490 container died d81bcb30e211e9fadb0d6ffd00807e3585ea69139cbf2b96a813f0d7d12da5d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-57237730ed7df1d1c482ac3c3f0ebe58cccad0e1b86950fa6422866056d10967-merged.mount: Deactivated successfully.
Sep 30 14:14:41 compute-0 podman[85547]: 2025-09-30 14:14:41.233852339 +0000 UTC m=+0.176951694 container remove d81bcb30e211e9fadb0d6ffd00807e3585ea69139cbf2b96a813f0d7d12da5d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:14:41 compute-0 systemd[1]: libpod-conmon-d81bcb30e211e9fadb0d6ffd00807e3585ea69139cbf2b96a813f0d7d12da5d9.scope: Deactivated successfully.
Sep 30 14:14:41 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Sep 30 14:14:41 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Sep 30 14:14:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Sep 30 14:14:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4223857589' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Sep 30 14:14:41 compute-0 podman[85606]: 2025-09-30 14:14:41.391085052 +0000 UTC m=+0.045036788 container create 0f9d13d8881477aa7b6d6e714976465ff216d0143b8d7eb224f5990e57c58d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:14:41 compute-0 systemd[1]: Started libpod-conmon-0f9d13d8881477aa7b6d6e714976465ff216d0143b8d7eb224f5990e57c58d72.scope.
Sep 30 14:14:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:41 compute-0 podman[85606]: 2025-09-30 14:14:41.372298737 +0000 UTC m=+0.026250523 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e00edc0aaa6b456d77c229451fcca5ccc4f9cc65e33cd986275f1402d9df24d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e00edc0aaa6b456d77c229451fcca5ccc4f9cc65e33cd986275f1402d9df24d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e00edc0aaa6b456d77c229451fcca5ccc4f9cc65e33cd986275f1402d9df24d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e00edc0aaa6b456d77c229451fcca5ccc4f9cc65e33cd986275f1402d9df24d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e00edc0aaa6b456d77c229451fcca5ccc4f9cc65e33cd986275f1402d9df24d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:41 compute-0 podman[85606]: 2025-09-30 14:14:41.484738269 +0000 UTC m=+0.138689995 container init 0f9d13d8881477aa7b6d6e714976465ff216d0143b8d7eb224f5990e57c58d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:14:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Sep 30 14:14:41 compute-0 podman[85606]: 2025-09-30 14:14:41.492696899 +0000 UTC m=+0.146648625 container start 0f9d13d8881477aa7b6d6e714976465ff216d0143b8d7eb224f5990e57c58d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:14:41 compute-0 podman[85606]: 2025-09-30 14:14:41.496011096 +0000 UTC m=+0.149962842 container attach 0f9d13d8881477aa7b6d6e714976465ff216d0143b8d7eb224f5990e57c58d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 14:14:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4223857589' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Sep 30 14:14:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Sep 30 14:14:41 compute-0 youthful_meninsky[85517]: enabled application 'rbd' on pool 'vms'
Sep 30 14:14:41 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Sep 30 14:14:41 compute-0 systemd[1]: libpod-e1de88cedba79263aa33c52f324f73687316dc23bfaf6918d414879942ca34a1.scope: Deactivated successfully.
Sep 30 14:14:41 compute-0 podman[85487]: 2025-09-30 14:14:41.579889106 +0000 UTC m=+0.757135870 container died e1de88cedba79263aa33c52f324f73687316dc23bfaf6918d414879942ca34a1 (image=quay.io/ceph/ceph:v19, name=youthful_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:41 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mgr.compute-2.udzudc 192.168.122.102:0/3096273757; not ready for session (expect reconnect)
Sep 30 14:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-09ab20560e349b36b720b8e43a0c5e82b4e39f8f3908f26cc6290a3fa63290f2-merged.mount: Deactivated successfully.
Sep 30 14:14:41 compute-0 podman[85487]: 2025-09-30 14:14:41.668649835 +0000 UTC m=+0.845896589 container remove e1de88cedba79263aa33c52f324f73687316dc23bfaf6918d414879942ca34a1 (image=quay.io/ceph/ceph:v19, name=youthful_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:14:41 compute-0 sudo[85465]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:41 compute-0 systemd[1]: libpod-conmon-e1de88cedba79263aa33c52f324f73687316dc23bfaf6918d414879942ca34a1.scope: Deactivated successfully.
Sep 30 14:14:41 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.udzudc started
Sep 30 14:14:41 compute-0 sudo[85672]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxyjhfkgimjpbqoklvzagtezetruuwdz ; /usr/bin/python3'
Sep 30 14:14:41 compute-0 sudo[85672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:41 compute-0 stoic_lovelace[85624]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:14:41 compute-0 stoic_lovelace[85624]: --> All data devices are unavailable
Sep 30 14:14:41 compute-0 systemd[1]: libpod-0f9d13d8881477aa7b6d6e714976465ff216d0143b8d7eb224f5990e57c58d72.scope: Deactivated successfully.
Sep 30 14:14:41 compute-0 conmon[85624]: conmon 0f9d13d8881477aa7b6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f9d13d8881477aa7b6d6e714976465ff216d0143b8d7eb224f5990e57c58d72.scope/container/memory.events
Sep 30 14:14:41 compute-0 podman[85606]: 2025-09-30 14:14:41.835997434 +0000 UTC m=+0.489949160 container died 0f9d13d8881477aa7b6d6e714976465ff216d0143b8d7eb224f5990e57c58d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lovelace, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e00edc0aaa6b456d77c229451fcca5ccc4f9cc65e33cd986275f1402d9df24d-merged.mount: Deactivated successfully.
Sep 30 14:14:41 compute-0 podman[85606]: 2025-09-30 14:14:41.881518604 +0000 UTC m=+0.535470330 container remove 0f9d13d8881477aa7b6d6e714976465ff216d0143b8d7eb224f5990e57c58d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lovelace, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 14:14:41 compute-0 systemd[1]: libpod-conmon-0f9d13d8881477aa7b6d6e714976465ff216d0143b8d7eb224f5990e57c58d72.scope: Deactivated successfully.
Sep 30 14:14:41 compute-0 sudo[85457]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:41 compute-0 python3[85676]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:41 compute-0 sudo[85690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:14:41 compute-0 sudo[85690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:14:41 compute-0 sudo[85690]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:42 compute-0 podman[85710]: 2025-09-30 14:14:42.021280096 +0000 UTC m=+0.048052967 container create efc987614c3daf9324761d7acbbbd909b93be0552422031220f58f6a463c0901 (image=quay.io/ceph/ceph:v19, name=fervent_driscoll, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:14:42 compute-0 ceph-mon[74194]: 3.8 deep-scrub starts
Sep 30 14:14:42 compute-0 ceph-mon[74194]: 3.8 deep-scrub ok
Sep 30 14:14:42 compute-0 ceph-mon[74194]: 2.1e scrub starts
Sep 30 14:14:42 compute-0 ceph-mon[74194]: 2.1e scrub ok
Sep 30 14:14:42 compute-0 ceph-mon[74194]: pgmap v81: 100 pgs: 1 creating+peering, 1 peering, 95 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:42 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4223857589' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Sep 30 14:14:42 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4223857589' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Sep 30 14:14:42 compute-0 ceph-mon[74194]: osdmap e25: 2 total, 2 up, 2 in
Sep 30 14:14:42 compute-0 ceph-mon[74194]: Standby manager daemon compute-2.udzudc started
Sep 30 14:14:42 compute-0 systemd[1]: Started libpod-conmon-efc987614c3daf9324761d7acbbbd909b93be0552422031220f58f6a463c0901.scope.
Sep 30 14:14:42 compute-0 sudo[85724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:14:42 compute-0 sudo[85724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:14:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd135906975ee859cef94e31a5fc47c97948b08b62522cc32af520216bdcbe4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd135906975ee859cef94e31a5fc47c97948b08b62522cc32af520216bdcbe4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:42 compute-0 podman[85710]: 2025-09-30 14:14:41.999546144 +0000 UTC m=+0.026319035 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:42 compute-0 podman[85710]: 2025-09-30 14:14:42.10303902 +0000 UTC m=+0.129811911 container init efc987614c3daf9324761d7acbbbd909b93be0552422031220f58f6a463c0901 (image=quay.io/ceph/ceph:v19, name=fervent_driscoll, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:14:42 compute-0 podman[85710]: 2025-09-30 14:14:42.109505371 +0000 UTC m=+0.136278242 container start efc987614c3daf9324761d7acbbbd909b93be0552422031220f58f6a463c0901 (image=quay.io/ceph/ceph:v19, name=fervent_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:14:42 compute-0 podman[85710]: 2025-09-30 14:14:42.113740252 +0000 UTC m=+0.140513143 container attach efc987614c3daf9324761d7acbbbd909b93be0552422031220f58f6a463c0901 (image=quay.io/ceph/ceph:v19, name=fervent_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:14:42 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.zeqptq started
Sep 30 14:14:42 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mgr.compute-1.zeqptq 192.168.122.101:0/2116422035; not ready for session (expect reconnect)
Sep 30 14:14:42 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Sep 30 14:14:42 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Sep 30 14:14:42 compute-0 podman[85819]: 2025-09-30 14:14:42.453800952 +0000 UTC m=+0.039505252 container create dccf59f41aaffaffbf8c20376e2e1c0639ef6e875e2fdfb1c861ab04efd683bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:14:42 compute-0 systemd[1]: Started libpod-conmon-dccf59f41aaffaffbf8c20376e2e1c0639ef6e875e2fdfb1c861ab04efd683bd.scope.
Sep 30 14:14:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:42 compute-0 podman[85819]: 2025-09-30 14:14:42.527940726 +0000 UTC m=+0.113645056 container init dccf59f41aaffaffbf8c20376e2e1c0639ef6e875e2fdfb1c861ab04efd683bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 14:14:42 compute-0 podman[85819]: 2025-09-30 14:14:42.437477032 +0000 UTC m=+0.023181352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:14:42 compute-0 podman[85819]: 2025-09-30 14:14:42.532786604 +0000 UTC m=+0.118490904 container start dccf59f41aaffaffbf8c20376e2e1c0639ef6e875e2fdfb1c861ab04efd683bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:14:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Sep 30 14:14:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1385278754' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Sep 30 14:14:42 compute-0 nervous_clarke[85836]: 167 167
Sep 30 14:14:42 compute-0 podman[85819]: 2025-09-30 14:14:42.536143552 +0000 UTC m=+0.121847872 container attach dccf59f41aaffaffbf8c20376e2e1c0639ef6e875e2fdfb1c861ab04efd683bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:14:42 compute-0 systemd[1]: libpod-dccf59f41aaffaffbf8c20376e2e1c0639ef6e875e2fdfb1c861ab04efd683bd.scope: Deactivated successfully.
Sep 30 14:14:42 compute-0 podman[85819]: 2025-09-30 14:14:42.53757574 +0000 UTC m=+0.123280040 container died dccf59f41aaffaffbf8c20376e2e1c0639ef6e875e2fdfb1c861ab04efd683bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 14:14:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "79a9ec1a-0d78-4c51-ab67-31c1affbe6d4"} v 0)
Sep 30 14:14:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "79a9ec1a-0d78-4c51-ab67-31c1affbe6d4"}]: dispatch
Sep 30 14:14:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Sep 30 14:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e6354d0fafc5018508cc42bc434145155ca4d26458cad39e175fa680650b771-merged.mount: Deactivated successfully.
Sep 30 14:14:42 compute-0 podman[85819]: 2025-09-30 14:14:42.57744278 +0000 UTC m=+0.163147080 container remove dccf59f41aaffaffbf8c20376e2e1c0639ef6e875e2fdfb1c861ab04efd683bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 14:14:42 compute-0 systemd[1]: libpod-conmon-dccf59f41aaffaffbf8c20376e2e1c0639ef6e875e2fdfb1c861ab04efd683bd.scope: Deactivated successfully.
Sep 30 14:14:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1385278754' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Sep 30 14:14:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "79a9ec1a-0d78-4c51-ab67-31c1affbe6d4"}]': finished
Sep 30 14:14:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Sep 30 14:14:42 compute-0 fervent_driscoll[85754]: enabled application 'rbd' on pool 'volumes'
Sep 30 14:14:42 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Sep 30 14:14:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:14:42 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:14:42 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:14:42 compute-0 systemd[1]: libpod-efc987614c3daf9324761d7acbbbd909b93be0552422031220f58f6a463c0901.scope: Deactivated successfully.
Sep 30 14:14:42 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from mgr.compute-2.udzudc 192.168.122.102:0/3096273757; not ready for session (expect reconnect)
Sep 30 14:14:42 compute-0 podman[85710]: 2025-09-30 14:14:42.649437627 +0000 UTC m=+0.676210498 container died efc987614c3daf9324761d7acbbbd909b93be0552422031220f58f6a463c0901 (image=quay.io/ceph/ceph:v19, name=fervent_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-edd135906975ee859cef94e31a5fc47c97948b08b62522cc32af520216bdcbe4-merged.mount: Deactivated successfully.
Sep 30 14:14:42 compute-0 podman[85710]: 2025-09-30 14:14:42.691810164 +0000 UTC m=+0.718583035 container remove efc987614c3daf9324761d7acbbbd909b93be0552422031220f58f6a463c0901 (image=quay.io/ceph/ceph:v19, name=fervent_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:14:42 compute-0 systemd[1]: libpod-conmon-efc987614c3daf9324761d7acbbbd909b93be0552422031220f58f6a463c0901.scope: Deactivated successfully.
Sep 30 14:14:42 compute-0 sudo[85672]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:42 compute-0 podman[85869]: 2025-09-30 14:14:42.73609879 +0000 UTC m=+0.042743387 container create 261c2ffefde0f38925a151fc8cdb38487bf734752f72230eb6f8592d2a366d91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:14:42 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.buxlkm(active, since 2m), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:14:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"} v 0)
Sep 30 14:14:42 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"}]: dispatch
Sep 30 14:14:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"} v 0)
Sep 30 14:14:42 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"}]: dispatch
Sep 30 14:14:42 compute-0 systemd[1]: Started libpod-conmon-261c2ffefde0f38925a151fc8cdb38487bf734752f72230eb6f8592d2a366d91.scope.
Sep 30 14:14:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28a8387878805d02c83cdc9ccd477a151ea81ec96ef25df0ac62a3702dbdd57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28a8387878805d02c83cdc9ccd477a151ea81ec96ef25df0ac62a3702dbdd57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28a8387878805d02c83cdc9ccd477a151ea81ec96ef25df0ac62a3702dbdd57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28a8387878805d02c83cdc9ccd477a151ea81ec96ef25df0ac62a3702dbdd57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:42 compute-0 podman[85869]: 2025-09-30 14:14:42.717246304 +0000 UTC m=+0.023890931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:14:42 compute-0 podman[85869]: 2025-09-30 14:14:42.815695438 +0000 UTC m=+0.122340055 container init 261c2ffefde0f38925a151fc8cdb38487bf734752f72230eb6f8592d2a366d91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:14:42 compute-0 podman[85869]: 2025-09-30 14:14:42.82186347 +0000 UTC m=+0.128508067 container start 261c2ffefde0f38925a151fc8cdb38487bf734752f72230eb6f8592d2a366d91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Sep 30 14:14:42 compute-0 podman[85869]: 2025-09-30 14:14:42.830150589 +0000 UTC m=+0.136795186 container attach 261c2ffefde0f38925a151fc8cdb38487bf734752f72230eb6f8592d2a366d91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 14:14:42 compute-0 sudo[85917]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzcercblvndjahasxxxzdefowwdfeolo ; /usr/bin/python3'
Sep 30 14:14:42 compute-0 sudo[85917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:43 compute-0 python3[85919]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v84: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 14:14:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 14:14:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 14:14:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: 3.1f scrub starts
Sep 30 14:14:43 compute-0 ceph-mon[74194]: 3.1f scrub ok
Sep 30 14:14:43 compute-0 ceph-mon[74194]: 2.1f scrub starts
Sep 30 14:14:43 compute-0 ceph-mon[74194]: 2.1f scrub ok
Sep 30 14:14:43 compute-0 ceph-mon[74194]: Standby manager daemon compute-1.zeqptq started
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1385278754' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3137703' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "79a9ec1a-0d78-4c51-ab67-31c1affbe6d4"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "79a9ec1a-0d78-4c51-ab67-31c1affbe6d4"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1385278754' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "79a9ec1a-0d78-4c51-ab67-31c1affbe6d4"}]': finished
Sep 30 14:14:43 compute-0 ceph-mon[74194]: osdmap e26: 3 total, 2 up, 3 in
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: mgrmap e10: compute-0.buxlkm(active, since 2m), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: 2.1d scrub starts
Sep 30 14:14:43 compute-0 ceph-mon[74194]: 2.1d scrub ok
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:14:43 compute-0 podman[85922]: 2025-09-30 14:14:43.098457558 +0000 UTC m=+0.063967606 container create 603c710bd4e59b32f6b6c98a4db5a725edbc91c4cb302683d96753953a012318 (image=quay.io/ceph/ceph:v19, name=relaxed_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]: {
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:     "0": [
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:         {
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "devices": [
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "/dev/loop3"
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             ],
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "lv_name": "ceph_lv0",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "lv_size": "21470642176",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "name": "ceph_lv0",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "tags": {
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.cluster_name": "ceph",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.crush_device_class": "",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.encrypted": "0",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.osd_id": "0",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.type": "block",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.vdo": "0",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:                 "ceph.with_tpm": "0"
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             },
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "type": "block",
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:             "vg_name": "ceph_vg0"
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:         }
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]:     ]
Sep 30 14:14:43 compute-0 affectionate_sutherland[85889]: }
Sep 30 14:14:43 compute-0 systemd[1]: Started libpod-conmon-603c710bd4e59b32f6b6c98a4db5a725edbc91c4cb302683d96753953a012318.scope.
Sep 30 14:14:43 compute-0 podman[85869]: 2025-09-30 14:14:43.143838484 +0000 UTC m=+0.450483081 container died 261c2ffefde0f38925a151fc8cdb38487bf734752f72230eb6f8592d2a366d91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:43 compute-0 systemd[1]: libpod-261c2ffefde0f38925a151fc8cdb38487bf734752f72230eb6f8592d2a366d91.scope: Deactivated successfully.
Sep 30 14:14:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4707fb6895f4d2ec3769d5a7cb17dd3cffe21890f434a40aede14d4f5b62f56/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4707fb6895f4d2ec3769d5a7cb17dd3cffe21890f434a40aede14d4f5b62f56/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c28a8387878805d02c83cdc9ccd477a151ea81ec96ef25df0ac62a3702dbdd57-merged.mount: Deactivated successfully.
Sep 30 14:14:43 compute-0 podman[85922]: 2025-09-30 14:14:43.075695708 +0000 UTC m=+0.041205776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:43 compute-0 podman[85869]: 2025-09-30 14:14:43.192405073 +0000 UTC m=+0.499049680 container remove 261c2ffefde0f38925a151fc8cdb38487bf734752f72230eb6f8592d2a366d91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_sutherland, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:14:43 compute-0 podman[85922]: 2025-09-30 14:14:43.202841698 +0000 UTC m=+0.168351746 container init 603c710bd4e59b32f6b6c98a4db5a725edbc91c4cb302683d96753953a012318 (image=quay.io/ceph/ceph:v19, name=relaxed_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 14:14:43 compute-0 systemd[1]: libpod-conmon-261c2ffefde0f38925a151fc8cdb38487bf734752f72230eb6f8592d2a366d91.scope: Deactivated successfully.
Sep 30 14:14:43 compute-0 podman[85922]: 2025-09-30 14:14:43.210843479 +0000 UTC m=+0.176353527 container start 603c710bd4e59b32f6b6c98a4db5a725edbc91c4cb302683d96753953a012318 (image=quay.io/ceph/ceph:v19, name=relaxed_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 14:14:43 compute-0 podman[85922]: 2025-09-30 14:14:43.215247535 +0000 UTC m=+0.180757583 container attach 603c710bd4e59b32f6b6c98a4db5a725edbc91c4cb302683d96753953a012318 (image=quay.io/ceph/ceph:v19, name=relaxed_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 14:14:43 compute-0 sudo[85724]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:43 compute-0 sudo[85955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:14:43 compute-0 sudo[85955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:14:43 compute-0 sudo[85955]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:43 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Sep 30 14:14:43 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Sep 30 14:14:43 compute-0 sudo[85990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:14:43 compute-0 sudo[85990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:14:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Sep 30 14:14:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2086974968' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Sep 30 14:14:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Sep 30 14:14:43 compute-0 podman[86066]: 2025-09-30 14:14:43.778014603 +0000 UTC m=+0.038234378 container create 7805ae0c8f962d8a6771bbbfb1c34dac514c4d788cb090917781157d61bbd7b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:14:43 compute-0 systemd[1]: Started libpod-conmon-7805ae0c8f962d8a6771bbbfb1c34dac514c4d788cb090917781157d61bbd7b1.scope.
Sep 30 14:14:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:43 compute-0 podman[86066]: 2025-09-30 14:14:43.852951478 +0000 UTC m=+0.113171283 container init 7805ae0c8f962d8a6771bbbfb1c34dac514c4d788cb090917781157d61bbd7b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:43 compute-0 podman[86066]: 2025-09-30 14:14:43.760529743 +0000 UTC m=+0.020749548 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:14:43 compute-0 podman[86066]: 2025-09-30 14:14:43.859529661 +0000 UTC m=+0.119749436 container start 7805ae0c8f962d8a6771bbbfb1c34dac514c4d788cb090917781157d61bbd7b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:14:43 compute-0 podman[86066]: 2025-09-30 14:14:43.863246229 +0000 UTC m=+0.123466034 container attach 7805ae0c8f962d8a6771bbbfb1c34dac514c4d788cb090917781157d61bbd7b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:14:43 compute-0 nifty_mahavira[86082]: 167 167
Sep 30 14:14:43 compute-0 systemd[1]: libpod-7805ae0c8f962d8a6771bbbfb1c34dac514c4d788cb090917781157d61bbd7b1.scope: Deactivated successfully.
Sep 30 14:14:43 compute-0 podman[86066]: 2025-09-30 14:14:43.865704074 +0000 UTC m=+0.125923859 container died 7805ae0c8f962d8a6771bbbfb1c34dac514c4d788cb090917781157d61bbd7b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab4c9d7ddfc52cf83a08163e1d99ed2d3de164369e904f30f2e130b8fc153ebf-merged.mount: Deactivated successfully.
Sep 30 14:14:43 compute-0 podman[86066]: 2025-09-30 14:14:43.904244079 +0000 UTC m=+0.164463854 container remove 7805ae0c8f962d8a6771bbbfb1c34dac514c4d788cb090917781157d61bbd7b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mahavira, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:14:43 compute-0 systemd[1]: libpod-conmon-7805ae0c8f962d8a6771bbbfb1c34dac514c4d788cb090917781157d61bbd7b1.scope: Deactivated successfully.
Sep 30 14:14:44 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:14:44 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:14:44 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:14:44 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2086974968' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Sep 30 14:14:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Sep 30 14:14:44 compute-0 relaxed_clarke[85939]: enabled application 'rbd' on pool 'backups'
Sep 30 14:14:44 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.467321396s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.352478027s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.467291832s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.352478027s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.1a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.956333160s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841598511s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.1a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.956305504s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841598511s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.14( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955915451s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841415405s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.14( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955902100s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841415405s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.467323303s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.352855682s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.15( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955864906s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841407776s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.467304230s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.352855682s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.15( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955832481s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841407776s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.13( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955721855s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841373444s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.13( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955708504s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841373444s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.16( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955753326s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841449738s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.467563629s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353332520s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.16( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955739021s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841449738s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.11( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955538750s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841339111s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.467527390s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353332520s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.11( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955523491s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841339111s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.10( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955413818s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841358185s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.f( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955585480s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841541290s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.10( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955395699s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841358185s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.f( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955570221s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841541290s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466878891s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.352897644s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466864586s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.352897644s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.e( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955235481s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841297150s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466973305s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353054047s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.e( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955220222s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841297150s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466959000s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353054047s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955145836s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841327667s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955131531s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841327667s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955086708s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841312408s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.955071449s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841312408s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.467140198s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353435516s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.467124939s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353435516s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466718674s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353111267s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466706276s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353111267s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466904640s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353374481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466894150s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353374481s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.3( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.954618454s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841133118s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.3( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.954598427s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841133118s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.5( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.954500198s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841136932s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.5( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.954485893s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841136932s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.9( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.954152107s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.840885162s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466730118s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353485107s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.9( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.954133034s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.840885162s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466720581s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353485107s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.954173088s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.841079712s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.a( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.954154015s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.841079712s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466464043s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353458405s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466444969s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353458405s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466432571s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353511810s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466422081s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353511810s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466469765s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353641510s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.1c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.953787804s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.840961456s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466460228s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353641510s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.1c( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.953766823s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.840961456s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.1d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.953556061s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 59.840839386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[3.1d( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.953542709s) [1] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 59.840839386s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466444969s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353813171s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.466435432s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353813171s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.465861320s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.353279114s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.465848923s) [1] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.353279114s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:14:44 compute-0 systemd[1]: libpod-603c710bd4e59b32f6b6c98a4db5a725edbc91c4cb302683d96753953a012318.scope: Deactivated successfully.
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.1b( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.13( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.10( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.19( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.e( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.d( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.c( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.1( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.4( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.6( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.a( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.9( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.1e( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.1f( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 27 pg[2.15( empty local-lis/les=0/0 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:14:44 compute-0 podman[86105]: 2025-09-30 14:14:44.084829926 +0000 UTC m=+0.072514431 container create 35f019726acf6d4145c07444d119b862b7ed56dc55d4365ec95e525481fcc156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feistel, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:44 compute-0 podman[85922]: 2025-09-30 14:14:44.087084466 +0000 UTC m=+1.052594514 container died 603c710bd4e59b32f6b6c98a4db5a725edbc91c4cb302683d96753953a012318 (image=quay.io/ceph/ceph:v19, name=relaxed_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:14:44 compute-0 podman[86105]: 2025-09-30 14:14:44.034467199 +0000 UTC m=+0.022151724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4707fb6895f4d2ec3769d5a7cb17dd3cffe21890f434a40aede14d4f5b62f56-merged.mount: Deactivated successfully.
Sep 30 14:14:44 compute-0 podman[85922]: 2025-09-30 14:14:44.206958654 +0000 UTC m=+1.172468702 container remove 603c710bd4e59b32f6b6c98a4db5a725edbc91c4cb302683d96753953a012318 (image=quay.io/ceph/ceph:v19, name=relaxed_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:14:44 compute-0 systemd[1]: Started libpod-conmon-35f019726acf6d4145c07444d119b862b7ed56dc55d4365ec95e525481fcc156.scope.
Sep 30 14:14:44 compute-0 systemd[1]: libpod-conmon-603c710bd4e59b32f6b6c98a4db5a725edbc91c4cb302683d96753953a012318.scope: Deactivated successfully.
Sep 30 14:14:44 compute-0 sudo[85917]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f80306e1ed05924a74b52c9ce75853fff3f622610ca856f79eaf21d3ceea9a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f80306e1ed05924a74b52c9ce75853fff3f622610ca856f79eaf21d3ceea9a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f80306e1ed05924a74b52c9ce75853fff3f622610ca856f79eaf21d3ceea9a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f80306e1ed05924a74b52c9ce75853fff3f622610ca856f79eaf21d3ceea9a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:44 compute-0 podman[86105]: 2025-09-30 14:14:44.257116636 +0000 UTC m=+0.244801161 container init 35f019726acf6d4145c07444d119b862b7ed56dc55d4365ec95e525481fcc156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:14:44 compute-0 podman[86105]: 2025-09-30 14:14:44.263905525 +0000 UTC m=+0.251590030 container start 35f019726acf6d4145c07444d119b862b7ed56dc55d4365ec95e525481fcc156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:14:44 compute-0 podman[86105]: 2025-09-30 14:14:44.281484168 +0000 UTC m=+0.269168703 container attach 35f019726acf6d4145c07444d119b862b7ed56dc55d4365ec95e525481fcc156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:14:44 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Sep 30 14:14:44 compute-0 sudo[86162]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxnkybmixwzsevmpspuznzvkbmfsbudy ; /usr/bin/python3'
Sep 30 14:14:44 compute-0 sudo[86162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:44 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Sep 30 14:14:44 compute-0 python3[86164]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:44 compute-0 podman[86174]: 2025-09-30 14:14:44.565240584 +0000 UTC m=+0.056573971 container create 046cadd9dfcb2b43cf2561153c756779fcf0e7c7c736b8da4f3300a6255ee700 (image=quay.io/ceph/ceph:v19, name=vigorous_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:14:44 compute-0 systemd[1]: Started libpod-conmon-046cadd9dfcb2b43cf2561153c756779fcf0e7c7c736b8da4f3300a6255ee700.scope.
Sep 30 14:14:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:44 compute-0 podman[86174]: 2025-09-30 14:14:44.547958019 +0000 UTC m=+0.039291416 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683f1c7c75ce9035fb3008c4d1b90c3af6858730a941615e498fc68ca28867bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683f1c7c75ce9035fb3008c4d1b90c3af6858730a941615e498fc68ca28867bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:44 compute-0 podman[86174]: 2025-09-30 14:14:44.659334204 +0000 UTC m=+0.150667601 container init 046cadd9dfcb2b43cf2561153c756779fcf0e7c7c736b8da4f3300a6255ee700 (image=quay.io/ceph/ceph:v19, name=vigorous_hodgkin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 14:14:44 compute-0 podman[86174]: 2025-09-30 14:14:44.666750339 +0000 UTC m=+0.158083716 container start 046cadd9dfcb2b43cf2561153c756779fcf0e7c7c736b8da4f3300a6255ee700 (image=quay.io/ceph/ceph:v19, name=vigorous_hodgkin, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:14:44 compute-0 podman[86174]: 2025-09-30 14:14:44.670418836 +0000 UTC m=+0.161752213 container attach 046cadd9dfcb2b43cf2561153c756779fcf0e7c7c736b8da4f3300a6255ee700 (image=quay.io/ceph/ceph:v19, name=vigorous_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:44 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:14:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:14:44 compute-0 lvm[86271]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:14:44 compute-0 lvm[86271]: VG ceph_vg0 finished
Sep 30 14:14:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v86: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:45 compute-0 quizzical_feistel[86134]: {}
Sep 30 14:14:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Sep 30 14:14:45 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/877083954' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Sep 30 14:14:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Sep 30 14:14:45 compute-0 systemd[1]: libpod-35f019726acf6d4145c07444d119b862b7ed56dc55d4365ec95e525481fcc156.scope: Deactivated successfully.
Sep 30 14:14:45 compute-0 systemd[1]: libpod-35f019726acf6d4145c07444d119b862b7ed56dc55d4365ec95e525481fcc156.scope: Consumed 1.117s CPU time.
Sep 30 14:14:45 compute-0 podman[86105]: 2025-09-30 14:14:45.059004384 +0000 UTC m=+1.046688899 container died 35f019726acf6d4145c07444d119b862b7ed56dc55d4365ec95e525481fcc156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 14:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f80306e1ed05924a74b52c9ce75853fff3f622610ca856f79eaf21d3ceea9a6-merged.mount: Deactivated successfully.
Sep 30 14:14:45 compute-0 ceph-mon[74194]: 3.1c scrub starts
Sep 30 14:14:45 compute-0 ceph-mon[74194]: 3.1c scrub ok
Sep 30 14:14:45 compute-0 ceph-mon[74194]: pgmap v84: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:45 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2086974968' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Sep 30 14:14:45 compute-0 ceph-mon[74194]: 2.1c deep-scrub starts
Sep 30 14:14:45 compute-0 ceph-mon[74194]: 2.1c deep-scrub ok
Sep 30 14:14:45 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:14:45 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:14:45 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:14:45 compute-0 podman[86105]: 2025-09-30 14:14:45.106113375 +0000 UTC m=+1.093797880 container remove 35f019726acf6d4145c07444d119b862b7ed56dc55d4365ec95e525481fcc156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 14:14:45 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2086974968' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Sep 30 14:14:45 compute-0 ceph-mon[74194]: osdmap e27: 3 total, 2 up, 3 in
Sep 30 14:14:45 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:14:45 compute-0 systemd[1]: libpod-conmon-35f019726acf6d4145c07444d119b862b7ed56dc55d4365ec95e525481fcc156.scope: Deactivated successfully.
Sep 30 14:14:45 compute-0 sudo[85990]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:14:45 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Sep 30 14:14:45 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Sep 30 14:14:45 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 8 completed events
Sep 30 14:14:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:14:46 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/877083954' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Sep 30 14:14:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Sep 30 14:14:46 compute-0 vigorous_hodgkin[86209]: enabled application 'rbd' on pool 'images'
Sep 30 14:14:46 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Sep 30 14:14:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:14:46 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:14:46 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:14:46 compute-0 systemd[1]: libpod-046cadd9dfcb2b43cf2561153c756779fcf0e7c7c736b8da4f3300a6255ee700.scope: Deactivated successfully.
Sep 30 14:14:46 compute-0 podman[86174]: 2025-09-30 14:14:46.272884889 +0000 UTC m=+1.764218276 container died 046cadd9dfcb2b43cf2561153c756779fcf0e7c7c736b8da4f3300a6255ee700 (image=quay.io/ceph/ceph:v19, name=vigorous_hodgkin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.1b( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.13( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.15( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.10( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.19( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.e( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.d( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.c( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.a( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.1( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.6( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.9( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.1e( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.4( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 28 pg[2.1f( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=21/21 les/c/f=23/23/0 sis=27) [0] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-683f1c7c75ce9035fb3008c4d1b90c3af6858730a941615e498fc68ca28867bd-merged.mount: Deactivated successfully.
Sep 30 14:14:46 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Sep 30 14:14:46 compute-0 podman[86174]: 2025-09-30 14:14:46.525964527 +0000 UTC m=+2.017297904 container remove 046cadd9dfcb2b43cf2561153c756779fcf0e7c7c736b8da4f3300a6255ee700 (image=quay.io/ceph/ceph:v19, name=vigorous_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:14:46 compute-0 systemd[1]: libpod-conmon-046cadd9dfcb2b43cf2561153c756779fcf0e7c7c736b8da4f3300a6255ee700.scope: Deactivated successfully.
Sep 30 14:14:46 compute-0 sudo[86162]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:46 compute-0 sudo[86320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckoezkzkxykwhmkheimjhibwgnvukion ; /usr/bin/python3'
Sep 30 14:14:46 compute-0 sudo[86320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:46 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Sep 30 14:14:46 compute-0 python3[86322]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:46 compute-0 podman[86323]: 2025-09-30 14:14:46.905861286 +0000 UTC m=+0.045150681 container create 8709ffcb747cd466b2bdc284fe6f54fddb4eee90c26adbc0efadec517378de4f (image=quay.io/ceph/ceph:v19, name=admiring_morse, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:14:46 compute-0 systemd[1]: Started libpod-conmon-8709ffcb747cd466b2bdc284fe6f54fddb4eee90c26adbc0efadec517378de4f.scope.
Sep 30 14:14:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21457ee856806e8b2763ef2e223e243a1ea794f3d42a8e3a9d0ae26e05814fa8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21457ee856806e8b2763ef2e223e243a1ea794f3d42a8e3a9d0ae26e05814fa8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:46 compute-0 podman[86323]: 2025-09-30 14:14:46.887662747 +0000 UTC m=+0.026952172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:46 compute-0 podman[86323]: 2025-09-30 14:14:46.994227624 +0000 UTC m=+0.133517039 container init 8709ffcb747cd466b2bdc284fe6f54fddb4eee90c26adbc0efadec517378de4f (image=quay.io/ceph/ceph:v19, name=admiring_morse, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:14:47 compute-0 podman[86323]: 2025-09-30 14:14:47.002065251 +0000 UTC m=+0.141354646 container start 8709ffcb747cd466b2bdc284fe6f54fddb4eee90c26adbc0efadec517378de4f (image=quay.io/ceph/ceph:v19, name=admiring_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 14:14:47 compute-0 podman[86323]: 2025-09-30 14:14:47.005194253 +0000 UTC m=+0.144483668 container attach 8709ffcb747cd466b2bdc284fe6f54fddb4eee90c26adbc0efadec517378de4f (image=quay.io/ceph/ceph:v19, name=admiring_morse, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:14:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v88: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Sep 30 14:14:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3828613119' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Sep 30 14:14:47 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Sep 30 14:14:47 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Sep 30 14:14:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Sep 30 14:14:47 compute-0 ceph-mon[74194]: 4.11 scrub starts
Sep 30 14:14:47 compute-0 ceph-mon[74194]: 4.11 scrub ok
Sep 30 14:14:47 compute-0 ceph-mon[74194]: 4.1d deep-scrub starts
Sep 30 14:14:47 compute-0 ceph-mon[74194]: 4.1d deep-scrub ok
Sep 30 14:14:47 compute-0 ceph-mon[74194]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:14:47 compute-0 ceph-mon[74194]: 2.1a scrub starts
Sep 30 14:14:47 compute-0 ceph-mon[74194]: 2.1a scrub ok
Sep 30 14:14:47 compute-0 ceph-mon[74194]: pgmap v86: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:47 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/877083954' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Sep 30 14:14:47 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2243770332' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Sep 30 14:14:47 compute-0 ceph-mon[74194]: 2.18 scrub starts
Sep 30 14:14:47 compute-0 ceph-mon[74194]: 2.18 scrub ok
Sep 30 14:14:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:14:48 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Sep 30 14:14:48 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Sep 30 14:14:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v89: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3828613119' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Sep 30 14:14:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Sep 30 14:14:49 compute-0 admiring_morse[86339]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Sep 30 14:14:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:49 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Sep 30 14:14:49 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event ad45e814-5f9b-4bfd-8ca5-e0e062859c0b (Global Recovery Event) in 26 seconds
Sep 30 14:14:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:14:49 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:14:49 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:14:49 compute-0 systemd[1]: libpod-8709ffcb747cd466b2bdc284fe6f54fddb4eee90c26adbc0efadec517378de4f.scope: Deactivated successfully.
Sep 30 14:14:49 compute-0 podman[86323]: 2025-09-30 14:14:49.171975714 +0000 UTC m=+2.311265109 container died 8709ffcb747cd466b2bdc284fe6f54fddb4eee90c26adbc0efadec517378de4f (image=quay.io/ceph/ceph:v19, name=admiring_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:14:49 compute-0 ceph-mon[74194]: 4.1e scrub starts
Sep 30 14:14:49 compute-0 ceph-mon[74194]: 4.1e scrub ok
Sep 30 14:14:49 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/877083954' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Sep 30 14:14:49 compute-0 ceph-mon[74194]: osdmap e28: 3 total, 2 up, 3 in
Sep 30 14:14:49 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:14:49 compute-0 ceph-mon[74194]: 3.18 scrub starts
Sep 30 14:14:49 compute-0 ceph-mon[74194]: 3.18 scrub ok
Sep 30 14:14:49 compute-0 ceph-mon[74194]: 2.17 scrub starts
Sep 30 14:14:49 compute-0 ceph-mon[74194]: 2.17 scrub ok
Sep 30 14:14:49 compute-0 ceph-mon[74194]: pgmap v88: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:49 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3828613119' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Sep 30 14:14:49 compute-0 ceph-mon[74194]: 3.17 scrub starts
Sep 30 14:14:49 compute-0 ceph-mon[74194]: 3.17 scrub ok
Sep 30 14:14:49 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:49 compute-0 ceph-mon[74194]: 2.16 deep-scrub starts
Sep 30 14:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-21457ee856806e8b2763ef2e223e243a1ea794f3d42a8e3a9d0ae26e05814fa8-merged.mount: Deactivated successfully.
Sep 30 14:14:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:49 compute-0 podman[86323]: 2025-09-30 14:14:49.210643513 +0000 UTC m=+2.349932908 container remove 8709ffcb747cd466b2bdc284fe6f54fddb4eee90c26adbc0efadec517378de4f (image=quay.io/ceph/ceph:v19, name=admiring_morse, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:49 compute-0 systemd[1]: libpod-conmon-8709ffcb747cd466b2bdc284fe6f54fddb4eee90c26adbc0efadec517378de4f.scope: Deactivated successfully.
Sep 30 14:14:49 compute-0 sudo[86320]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:49 compute-0 sudo[86400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtstfickfdugscuoetmesynhbxsrlsxd ; /usr/bin/python3'
Sep 30 14:14:49 compute-0 sudo[86400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:49 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Sep 30 14:14:49 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Sep 30 14:14:49 compute-0 python3[86402]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:49 compute-0 podman[86403]: 2025-09-30 14:14:49.536817896 +0000 UTC m=+0.039453550 container create c1a95156b6317a418c6689105b635bcbf2b6ed76269f78de0e72663d64a4c343 (image=quay.io/ceph/ceph:v19, name=focused_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:49 compute-0 systemd[1]: Started libpod-conmon-c1a95156b6317a418c6689105b635bcbf2b6ed76269f78de0e72663d64a4c343.scope.
Sep 30 14:14:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b356744366dfac7aa3e0ca7c352d82061099cfb72f6443a12ee301f04aa143/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b356744366dfac7aa3e0ca7c352d82061099cfb72f6443a12ee301f04aa143/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:49 compute-0 podman[86403]: 2025-09-30 14:14:49.517788545 +0000 UTC m=+0.020424239 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:49 compute-0 podman[86403]: 2025-09-30 14:14:49.637486019 +0000 UTC m=+0.140121703 container init c1a95156b6317a418c6689105b635bcbf2b6ed76269f78de0e72663d64a4c343 (image=quay.io/ceph/ceph:v19, name=focused_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:14:49 compute-0 podman[86403]: 2025-09-30 14:14:49.642810119 +0000 UTC m=+0.145445783 container start c1a95156b6317a418c6689105b635bcbf2b6ed76269f78de0e72663d64a4c343 (image=quay.io/ceph/ceph:v19, name=focused_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 14:14:49 compute-0 podman[86403]: 2025-09-30 14:14:49.646581128 +0000 UTC m=+0.149216812 container attach c1a95156b6317a418c6689105b635bcbf2b6ed76269f78de0e72663d64a4c343 (image=quay.io/ceph/ceph:v19, name=focused_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:14:49 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:14:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:14:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Sep 30 14:14:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2861333342' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Sep 30 14:14:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Sep 30 14:14:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2861333342' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Sep 30 14:14:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Sep 30 14:14:50 compute-0 focused_easley[86417]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Sep 30 14:14:50 compute-0 ceph-mon[74194]: 2.16 deep-scrub ok
Sep 30 14:14:50 compute-0 ceph-mon[74194]: 4.10 scrub starts
Sep 30 14:14:50 compute-0 ceph-mon[74194]: 4.10 scrub ok
Sep 30 14:14:50 compute-0 ceph-mon[74194]: 2.14 scrub starts
Sep 30 14:14:50 compute-0 ceph-mon[74194]: 2.14 scrub ok
Sep 30 14:14:50 compute-0 ceph-mon[74194]: pgmap v89: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:50 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3828613119' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Sep 30 14:14:50 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:50 compute-0 ceph-mon[74194]: osdmap e29: 3 total, 2 up, 3 in
Sep 30 14:14:50 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:14:50 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:50 compute-0 ceph-mon[74194]: 4.12 deep-scrub starts
Sep 30 14:14:50 compute-0 ceph-mon[74194]: 4.12 deep-scrub ok
Sep 30 14:14:50 compute-0 ceph-mon[74194]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:14:50 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2861333342' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Sep 30 14:14:50 compute-0 systemd[1]: libpod-c1a95156b6317a418c6689105b635bcbf2b6ed76269f78de0e72663d64a4c343.scope: Deactivated successfully.
Sep 30 14:14:50 compute-0 podman[86403]: 2025-09-30 14:14:50.239021988 +0000 UTC m=+0.741657672 container died c1a95156b6317a418c6689105b635bcbf2b6ed76269f78de0e72663d64a4c343 (image=quay.io/ceph/ceph:v19, name=focused_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:50 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Sep 30 14:14:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:14:50 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:14:50 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:14:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-81b356744366dfac7aa3e0ca7c352d82061099cfb72f6443a12ee301f04aa143-merged.mount: Deactivated successfully.
Sep 30 14:14:50 compute-0 podman[86403]: 2025-09-30 14:14:50.318911223 +0000 UTC m=+0.821546887 container remove c1a95156b6317a418c6689105b635bcbf2b6ed76269f78de0e72663d64a4c343 (image=quay.io/ceph/ceph:v19, name=focused_easley, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:50 compute-0 systemd[1]: libpod-conmon-c1a95156b6317a418c6689105b635bcbf2b6ed76269f78de0e72663d64a4c343.scope: Deactivated successfully.
Sep 30 14:14:50 compute-0 sudo[86400]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:50 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Sep 30 14:14:50 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Sep 30 14:14:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v92: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:51 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Sep 30 14:14:51 compute-0 ceph-mon[74194]: 2.12 scrub starts
Sep 30 14:14:51 compute-0 ceph-mon[74194]: 2.12 scrub ok
Sep 30 14:14:51 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2861333342' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Sep 30 14:14:51 compute-0 ceph-mon[74194]: osdmap e30: 3 total, 2 up, 3 in
Sep 30 14:14:51 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:14:51 compute-0 ceph-mon[74194]: 4.14 scrub starts
Sep 30 14:14:51 compute-0 ceph-mon[74194]: 4.14 scrub ok
Sep 30 14:14:51 compute-0 python3[86531]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 14:14:51 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.12 deep-scrub starts
Sep 30 14:14:51 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.12 deep-scrub ok
Sep 30 14:14:51 compute-0 python3[86602]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759241690.9990218-35252-82657712713359/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:14:52 compute-0 sudo[86702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keugkeidegsuasigtufytvfjdchhcnoi ; /usr/bin/python3'
Sep 30 14:14:52 compute-0 sudo[86702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:52 compute-0 python3[86704]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 14:14:52 compute-0 sudo[86702]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:52 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Sep 30 14:14:52 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Sep 30 14:14:52 compute-0 sudo[86777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhmjzlmqgrwltcxyalnrprsxnfaeavdx ; /usr/bin/python3'
Sep 30 14:14:52 compute-0 sudo[86777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:52 compute-0 ceph-mon[74194]: 2.11 scrub starts
Sep 30 14:14:52 compute-0 ceph-mon[74194]: 2.11 scrub ok
Sep 30 14:14:52 compute-0 ceph-mon[74194]: pgmap v92: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:52 compute-0 ceph-mon[74194]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Sep 30 14:14:52 compute-0 ceph-mon[74194]: 3.12 deep-scrub starts
Sep 30 14:14:52 compute-0 ceph-mon[74194]: 3.12 deep-scrub ok
Sep 30 14:14:52 compute-0 python3[86779]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759241691.9175355-35267-266112215248004/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=b0bb336201ee4004543b054f5b4825d6fdd45d1d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:14:52 compute-0 sudo[86777]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:52 compute-0 sudo[86827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldzzridrkjijydyxywftsgbeliehrrgh ; /usr/bin/python3'
Sep 30 14:14:52 compute-0 sudo[86827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:52 compute-0 python3[86829]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:53 compute-0 podman[86830]: 2025-09-30 14:14:53.017035903 +0000 UTC m=+0.038174357 container create bc5e91559a7d06f03cec4afb6011a44a7c571ea5458297bed80faa659f8923aa (image=quay.io/ceph/ceph:v19, name=eloquent_borg, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:14:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v93: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:53 compute-0 systemd[1]: Started libpod-conmon-bc5e91559a7d06f03cec4afb6011a44a7c571ea5458297bed80faa659f8923aa.scope.
Sep 30 14:14:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050fdf5a3e3a298ebb6c9492d2f78d58296ca0c40c5b12ec16d83c827c985202/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050fdf5a3e3a298ebb6c9492d2f78d58296ca0c40c5b12ec16d83c827c985202/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050fdf5a3e3a298ebb6c9492d2f78d58296ca0c40c5b12ec16d83c827c985202/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:53 compute-0 podman[86830]: 2025-09-30 14:14:53.077041254 +0000 UTC m=+0.098179718 container init bc5e91559a7d06f03cec4afb6011a44a7c571ea5458297bed80faa659f8923aa (image=quay.io/ceph/ceph:v19, name=eloquent_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:14:53 compute-0 podman[86830]: 2025-09-30 14:14:53.082986401 +0000 UTC m=+0.104124855 container start bc5e91559a7d06f03cec4afb6011a44a7c571ea5458297bed80faa659f8923aa (image=quay.io/ceph/ceph:v19, name=eloquent_borg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:53 compute-0 podman[86830]: 2025-09-30 14:14:53.086335779 +0000 UTC m=+0.107474233 container attach bc5e91559a7d06f03cec4afb6011a44a7c571ea5458297bed80faa659f8923aa (image=quay.io/ceph/ceph:v19, name=eloquent_borg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:14:53 compute-0 podman[86830]: 2025-09-30 14:14:53.000462777 +0000 UTC m=+0.021601251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:53 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Sep 30 14:14:53 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Sep 30 14:14:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Sep 30 14:14:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2810432276' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 14:14:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2810432276' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Sep 30 14:14:53 compute-0 eloquent_borg[86845]: 
Sep 30 14:14:53 compute-0 eloquent_borg[86845]: [global]
Sep 30 14:14:53 compute-0 eloquent_borg[86845]:         fsid = 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:14:53 compute-0 eloquent_borg[86845]:         mon_host = 192.168.122.100
Sep 30 14:14:53 compute-0 ceph-mon[74194]: 2.f scrub starts
Sep 30 14:14:53 compute-0 ceph-mon[74194]: 2.f scrub ok
Sep 30 14:14:53 compute-0 ceph-mon[74194]: 4.16 scrub starts
Sep 30 14:14:53 compute-0 ceph-mon[74194]: 4.16 scrub ok
Sep 30 14:14:53 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2810432276' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Sep 30 14:14:53 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2810432276' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Sep 30 14:14:53 compute-0 systemd[1]: libpod-bc5e91559a7d06f03cec4afb6011a44a7c571ea5458297bed80faa659f8923aa.scope: Deactivated successfully.
Sep 30 14:14:53 compute-0 podman[86830]: 2025-09-30 14:14:53.4674126 +0000 UTC m=+0.488551054 container died bc5e91559a7d06f03cec4afb6011a44a7c571ea5458297bed80faa659f8923aa (image=quay.io/ceph/ceph:v19, name=eloquent_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-050fdf5a3e3a298ebb6c9492d2f78d58296ca0c40c5b12ec16d83c827c985202-merged.mount: Deactivated successfully.
Sep 30 14:14:53 compute-0 podman[86830]: 2025-09-30 14:14:53.515648711 +0000 UTC m=+0.536787165 container remove bc5e91559a7d06f03cec4afb6011a44a7c571ea5458297bed80faa659f8923aa (image=quay.io/ceph/ceph:v19, name=eloquent_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:53 compute-0 systemd[1]: libpod-conmon-bc5e91559a7d06f03cec4afb6011a44a7c571ea5458297bed80faa659f8923aa.scope: Deactivated successfully.
Sep 30 14:14:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Sep 30 14:14:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Sep 30 14:14:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:14:53 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Sep 30 14:14:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Sep 30 14:14:53 compute-0 sudo[86827]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:53 compute-0 sudo[86903]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhlllvuuysjdfylnqtntkkuelpfkagjg ; /usr/bin/python3'
Sep 30 14:14:53 compute-0 sudo[86903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:53 compute-0 python3[86905]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:53 compute-0 podman[86906]: 2025-09-30 14:14:53.900859221 +0000 UTC m=+0.045764107 container create 68372b56398c4d7f04f93a2d07a462a5a1d65b0f64da536f979ca0e251b427dd (image=quay.io/ceph/ceph:v19, name=cool_maxwell, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:14:53 compute-0 systemd[1]: Started libpod-conmon-68372b56398c4d7f04f93a2d07a462a5a1d65b0f64da536f979ca0e251b427dd.scope.
Sep 30 14:14:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a952c10952ffd15e2729f3e2c3d85a457df133d655ab916bd9ce2b50a331d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a952c10952ffd15e2729f3e2c3d85a457df133d655ab916bd9ce2b50a331d7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a952c10952ffd15e2729f3e2c3d85a457df133d655ab916bd9ce2b50a331d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:53 compute-0 podman[86906]: 2025-09-30 14:14:53.881066949 +0000 UTC m=+0.025971865 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:53 compute-0 podman[86906]: 2025-09-30 14:14:53.985015758 +0000 UTC m=+0.129920674 container init 68372b56398c4d7f04f93a2d07a462a5a1d65b0f64da536f979ca0e251b427dd (image=quay.io/ceph/ceph:v19, name=cool_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:14:53 compute-0 podman[86906]: 2025-09-30 14:14:53.991897779 +0000 UTC m=+0.136802665 container start 68372b56398c4d7f04f93a2d07a462a5a1d65b0f64da536f979ca0e251b427dd (image=quay.io/ceph/ceph:v19, name=cool_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:14:54 compute-0 podman[86906]: 2025-09-30 14:14:54.013481948 +0000 UTC m=+0.158386834 container attach 68372b56398c4d7f04f93a2d07a462a5a1d65b0f64da536f979ca0e251b427dd (image=quay.io/ceph/ceph:v19, name=cool_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 14:14:54 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 9 completed events
Sep 30 14:14:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:14:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:54 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Sep 30 14:14:54 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Sep 30 14:14:54 compute-0 ceph-mon[74194]: 2.3 scrub starts
Sep 30 14:14:54 compute-0 ceph-mon[74194]: 2.3 scrub ok
Sep 30 14:14:54 compute-0 ceph-mon[74194]: pgmap v93: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:54 compute-0 ceph-mon[74194]: 4.17 scrub starts
Sep 30 14:14:54 compute-0 ceph-mon[74194]: 4.17 scrub ok
Sep 30 14:14:54 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Sep 30 14:14:54 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:14:54 compute-0 ceph-mon[74194]: Deploying daemon osd.2 on compute-2
Sep 30 14:14:54 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Sep 30 14:14:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4121503398' entity='client.admin' 
Sep 30 14:14:54 compute-0 cool_maxwell[86921]: set ssl_option
Sep 30 14:14:54 compute-0 systemd[1]: libpod-68372b56398c4d7f04f93a2d07a462a5a1d65b0f64da536f979ca0e251b427dd.scope: Deactivated successfully.
Sep 30 14:14:54 compute-0 podman[86906]: 2025-09-30 14:14:54.539852767 +0000 UTC m=+0.684757653 container died 68372b56398c4d7f04f93a2d07a462a5a1d65b0f64da536f979ca0e251b427dd (image=quay.io/ceph/ceph:v19, name=cool_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a952c10952ffd15e2729f3e2c3d85a457df133d655ab916bd9ce2b50a331d7-merged.mount: Deactivated successfully.
Sep 30 14:14:54 compute-0 podman[86906]: 2025-09-30 14:14:54.579335687 +0000 UTC m=+0.724240583 container remove 68372b56398c4d7f04f93a2d07a462a5a1d65b0f64da536f979ca0e251b427dd (image=quay.io/ceph/ceph:v19, name=cool_maxwell, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 14:14:54 compute-0 systemd[1]: libpod-conmon-68372b56398c4d7f04f93a2d07a462a5a1d65b0f64da536f979ca0e251b427dd.scope: Deactivated successfully.
Sep 30 14:14:54 compute-0 sudo[86903]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:54 compute-0 sudo[86980]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iccujoczfeksbesqvoxmvhlhvrckhmsl ; /usr/bin/python3'
Sep 30 14:14:54 compute-0 sudo[86980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:14:54 compute-0 python3[86982]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:54 compute-0 podman[86983]: 2025-09-30 14:14:54.941946161 +0000 UTC m=+0.039848361 container create 10a3aa4ddbaab6fbf26f86daa4aa5a435340dda8fd7823b5d48b548c1f41b8b4 (image=quay.io/ceph/ceph:v19, name=strange_hodgkin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:14:54 compute-0 systemd[1]: Started libpod-conmon-10a3aa4ddbaab6fbf26f86daa4aa5a435340dda8fd7823b5d48b548c1f41b8b4.scope.
Sep 30 14:14:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f55a8d822e700f653c09a8223a9d88676e0d7bc67fffb2c36372debbed5bf0d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f55a8d822e700f653c09a8223a9d88676e0d7bc67fffb2c36372debbed5bf0d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f55a8d822e700f653c09a8223a9d88676e0d7bc67fffb2c36372debbed5bf0d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:55 compute-0 podman[86983]: 2025-09-30 14:14:54.924223874 +0000 UTC m=+0.022126084 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v94: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:55 compute-0 podman[86983]: 2025-09-30 14:14:55.09298113 +0000 UTC m=+0.190883330 container init 10a3aa4ddbaab6fbf26f86daa4aa5a435340dda8fd7823b5d48b548c1f41b8b4 (image=quay.io/ceph/ceph:v19, name=strange_hodgkin, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:14:55 compute-0 podman[86983]: 2025-09-30 14:14:55.099970254 +0000 UTC m=+0.197872444 container start 10a3aa4ddbaab6fbf26f86daa4aa5a435340dda8fd7823b5d48b548c1f41b8b4 (image=quay.io/ceph/ceph:v19, name=strange_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 14:14:55 compute-0 podman[86983]: 2025-09-30 14:14:55.151873092 +0000 UTC m=+0.249775282 container attach 10a3aa4ddbaab6fbf26f86daa4aa5a435340dda8fd7823b5d48b548c1f41b8b4 (image=quay.io/ceph/ceph:v19, name=strange_hodgkin, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:14:55 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.b scrub starts
Sep 30 14:14:55 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.b scrub ok
Sep 30 14:14:55 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:14:55 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Sep 30 14:14:55 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Sep 30 14:14:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Sep 30 14:14:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:55 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Sep 30 14:14:55 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Sep 30 14:14:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 14:14:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:55 compute-0 strange_hodgkin[86999]: Scheduled rgw.rgw update...
Sep 30 14:14:55 compute-0 strange_hodgkin[86999]: Scheduled ingress.rgw.default update...
Sep 30 14:14:55 compute-0 systemd[1]: libpod-10a3aa4ddbaab6fbf26f86daa4aa5a435340dda8fd7823b5d48b548c1f41b8b4.scope: Deactivated successfully.
Sep 30 14:14:55 compute-0 podman[86983]: 2025-09-30 14:14:55.589665887 +0000 UTC m=+0.687568067 container died 10a3aa4ddbaab6fbf26f86daa4aa5a435340dda8fd7823b5d48b548c1f41b8b4 (image=quay.io/ceph/ceph:v19, name=strange_hodgkin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:14:55 compute-0 ceph-mon[74194]: 2.0 scrub starts
Sep 30 14:14:55 compute-0 ceph-mon[74194]: 2.0 scrub ok
Sep 30 14:14:55 compute-0 ceph-mon[74194]: 3.19 scrub starts
Sep 30 14:14:55 compute-0 ceph-mon[74194]: 3.19 scrub ok
Sep 30 14:14:55 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4121503398' entity='client.admin' 
Sep 30 14:14:55 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f55a8d822e700f653c09a8223a9d88676e0d7bc67fffb2c36372debbed5bf0d-merged.mount: Deactivated successfully.
Sep 30 14:14:55 compute-0 podman[86983]: 2025-09-30 14:14:55.792502911 +0000 UTC m=+0.890405111 container remove 10a3aa4ddbaab6fbf26f86daa4aa5a435340dda8fd7823b5d48b548c1f41b8b4 (image=quay.io/ceph/ceph:v19, name=strange_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:55 compute-0 sudo[86980]: pam_unix(sudo:session): session closed for user root
Sep 30 14:14:55 compute-0 systemd[1]: libpod-conmon-10a3aa4ddbaab6fbf26f86daa4aa5a435340dda8fd7823b5d48b548c1f41b8b4.scope: Deactivated successfully.
Sep 30 14:14:56 compute-0 python3[87109]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 14:14:56 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.b scrub starts
Sep 30 14:14:56 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.b scrub ok
Sep 30 14:14:56 compute-0 python3[87180]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759241695.9557238-35287-190881534381900/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:14:56 compute-0 ceph-mon[74194]: 2.2 scrub starts
Sep 30 14:14:56 compute-0 ceph-mon[74194]: pgmap v94: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:56 compute-0 ceph-mon[74194]: 2.2 scrub ok
Sep 30 14:14:56 compute-0 ceph-mon[74194]: 4.b scrub starts
Sep 30 14:14:56 compute-0 ceph-mon[74194]: 4.b scrub ok
Sep 30 14:14:56 compute-0 ceph-mon[74194]: from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:14:56 compute-0 ceph-mon[74194]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Sep 30 14:14:56 compute-0 ceph-mon[74194]: Saving service ingress.rgw.default spec with placement count:2
Sep 30 14:14:56 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:57 compute-0 sudo[87228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibdqbiakepivqnxcecydqfxcqllxmeng ; /usr/bin/python3'
Sep 30 14:14:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v95: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:57 compute-0 sudo[87228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:14:57 compute-0 python3[87230]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:14:57 compute-0 podman[87231]: 2025-09-30 14:14:57.251306329 +0000 UTC m=+0.068691971 container create 19f3c5fef3fe6dcff292d9d9135f14a70a2f279b4e17eb1dd7dd124b380dcb06 (image=quay.io/ceph/ceph:v19, name=confident_lehmann, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:57 compute-0 podman[87231]: 2025-09-30 14:14:57.205031489 +0000 UTC m=+0.022417151 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:14:57 compute-0 systemd[1]: Started libpod-conmon-19f3c5fef3fe6dcff292d9d9135f14a70a2f279b4e17eb1dd7dd124b380dcb06.scope.
Sep 30 14:14:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc58b654890cf22655fda9aba22b3c250247125caab8f9918a1ef2b62c0566f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc58b654890cf22655fda9aba22b3c250247125caab8f9918a1ef2b62c0566f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc58b654890cf22655fda9aba22b3c250247125caab8f9918a1ef2b62c0566f6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:14:57 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Sep 30 14:14:57 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Sep 30 14:14:57 compute-0 podman[87231]: 2025-09-30 14:14:57.453668071 +0000 UTC m=+0.271053743 container init 19f3c5fef3fe6dcff292d9d9135f14a70a2f279b4e17eb1dd7dd124b380dcb06 (image=quay.io/ceph/ceph:v19, name=confident_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:14:57 compute-0 podman[87231]: 2025-09-30 14:14:57.459045033 +0000 UTC m=+0.276430675 container start 19f3c5fef3fe6dcff292d9d9135f14a70a2f279b4e17eb1dd7dd124b380dcb06 (image=quay.io/ceph/ceph:v19, name=confident_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:14:57 compute-0 podman[87231]: 2025-09-30 14:14:57.469831187 +0000 UTC m=+0.287216829 container attach 19f3c5fef3fe6dcff292d9d9135f14a70a2f279b4e17eb1dd7dd124b380dcb06 (image=quay.io/ceph/ceph:v19, name=confident_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:14:57 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14277 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:14:57 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service node-exporter spec with placement *
Sep 30 14:14:57 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Sep 30 14:14:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Sep 30 14:14:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:14:58 compute-0 ceph-mon[74194]: 2.5 scrub starts
Sep 30 14:14:58 compute-0 ceph-mon[74194]: 2.5 scrub ok
Sep 30 14:14:58 compute-0 ceph-mon[74194]: 3.b scrub starts
Sep 30 14:14:58 compute-0 ceph-mon[74194]: 3.b scrub ok
Sep 30 14:14:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:58 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Sep 30 14:14:58 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Sep 30 14:14:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Sep 30 14:14:58 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Sep 30 14:14:58 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Sep 30 14:14:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:14:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v96: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:59 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Sep 30 14:14:59 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Sep 30 14:14:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Sep 30 14:14:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:59 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Sep 30 14:14:59 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Sep 30 14:14:59 compute-0 ceph-mon[74194]: 2.7 scrub starts
Sep 30 14:14:59 compute-0 ceph-mon[74194]: 2.7 scrub ok
Sep 30 14:14:59 compute-0 ceph-mon[74194]: pgmap v95: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:14:59 compute-0 ceph-mon[74194]: 3.0 scrub starts
Sep 30 14:14:59 compute-0 ceph-mon[74194]: 3.0 scrub ok
Sep 30 14:14:59 compute-0 ceph-mon[74194]: from='client.14277 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:14:59 compute-0 ceph-mon[74194]: Saving service node-exporter spec with placement *
Sep 30 14:14:59 compute-0 ceph-mon[74194]: 2.8 scrub starts
Sep 30 14:14:59 compute-0 ceph-mon[74194]: 2.8 scrub ok
Sep 30 14:14:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:59 compute-0 ceph-mon[74194]: Saving service grafana spec with placement compute-0;count:1
Sep 30 14:14:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:59 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:59 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Sep 30 14:14:59 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Sep 30 14:14:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Sep 30 14:14:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:14:59 compute-0 confident_lehmann[87246]: Scheduled node-exporter update...
Sep 30 14:14:59 compute-0 confident_lehmann[87246]: Scheduled grafana update...
Sep 30 14:14:59 compute-0 confident_lehmann[87246]: Scheduled prometheus update...
Sep 30 14:14:59 compute-0 confident_lehmann[87246]: Scheduled alertmanager update...
Sep 30 14:14:59 compute-0 systemd[1]: libpod-19f3c5fef3fe6dcff292d9d9135f14a70a2f279b4e17eb1dd7dd124b380dcb06.scope: Deactivated successfully.
Sep 30 14:14:59 compute-0 podman[87231]: 2025-09-30 14:14:59.742428864 +0000 UTC m=+2.559814506 container died 19f3c5fef3fe6dcff292d9d9135f14a70a2f279b4e17eb1dd7dd124b380dcb06 (image=quay.io/ceph/ceph:v19, name=confident_lehmann, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:14:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc58b654890cf22655fda9aba22b3c250247125caab8f9918a1ef2b62c0566f6-merged.mount: Deactivated successfully.
Sep 30 14:15:00 compute-0 podman[87231]: 2025-09-30 14:15:00.020995054 +0000 UTC m=+2.838380696 container remove 19f3c5fef3fe6dcff292d9d9135f14a70a2f279b4e17eb1dd7dd124b380dcb06 (image=quay.io/ceph/ceph:v19, name=confident_lehmann, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Sep 30 14:15:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Sep 30 14:15:00 compute-0 sudo[87228]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:00 compute-0 systemd[1]: libpod-conmon-19f3c5fef3fe6dcff292d9d9135f14a70a2f279b4e17eb1dd7dd124b380dcb06.scope: Deactivated successfully.
Sep 30 14:15:00 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Sep 30 14:15:00 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Sep 30 14:15:00 compute-0 sudo[87306]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyhhefyxuzuejqaizkvfckputruidrhb ; /usr/bin/python3'
Sep 30 14:15:00 compute-0 sudo[87306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:00 compute-0 python3[87308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:00 compute-0 podman[87309]: 2025-09-30 14:15:00.609102669 +0000 UTC m=+0.026790327 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:00 compute-0 podman[87309]: 2025-09-30 14:15:00.825933903 +0000 UTC m=+0.243621541 container create 65d877a4b08696654ed749953a9a543d9fb7e2bb836380b744d1069f81172461 (image=quay.io/ceph/ceph:v19, name=wonderful_torvalds, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:15:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Sep 30 14:15:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v97: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:01 compute-0 ceph-mon[74194]: 4.0 scrub starts
Sep 30 14:15:01 compute-0 ceph-mon[74194]: 4.0 scrub ok
Sep 30 14:15:01 compute-0 ceph-mon[74194]: 2.b scrub starts
Sep 30 14:15:01 compute-0 ceph-mon[74194]: 2.b scrub ok
Sep 30 14:15:01 compute-0 ceph-mon[74194]: pgmap v96: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:01 compute-0 ceph-mon[74194]: Saving service prometheus spec with placement compute-0;count:1
Sep 30 14:15:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:01 compute-0 ceph-mon[74194]: 3.7 scrub starts
Sep 30 14:15:01 compute-0 ceph-mon[74194]: 3.7 scrub ok
Sep 30 14:15:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:01 compute-0 ceph-mon[74194]: Saving service alertmanager spec with placement compute-0;count:1
Sep 30 14:15:01 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:01 compute-0 ceph-mon[74194]: from='osd.2 [v2:192.168.122.102:6800/1427596359,v1:192.168.122.102:6801/1427596359]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Sep 30 14:15:01 compute-0 ceph-mon[74194]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Sep 30 14:15:01 compute-0 systemd[1]: Started libpod-conmon-65d877a4b08696654ed749953a9a543d9fb7e2bb836380b744d1069f81172461.scope.
Sep 30 14:15:01 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Sep 30 14:15:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:01 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Sep 30 14:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927a233c3b744271eaea7b1435c93ef6732dff569f32bbf1e655cc29094ded78/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927a233c3b744271eaea7b1435c93ef6732dff569f32bbf1e655cc29094ded78/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927a233c3b744271eaea7b1435c93ef6732dff569f32bbf1e655cc29094ded78/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:01 compute-0 podman[87309]: 2025-09-30 14:15:01.510371687 +0000 UTC m=+0.928059345 container init 65d877a4b08696654ed749953a9a543d9fb7e2bb836380b744d1069f81172461 (image=quay.io/ceph/ceph:v19, name=wonderful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 14:15:01 compute-0 podman[87309]: 2025-09-30 14:15:01.518924972 +0000 UTC m=+0.936612610 container start 65d877a4b08696654ed749953a9a543d9fb7e2bb836380b744d1069f81172461 (image=quay.io/ceph/ceph:v19, name=wonderful_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:15:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Sep 30 14:15:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Sep 30 14:15:01 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Sep 30 14:15:01 compute-0 podman[87309]: 2025-09-30 14:15:01.759400648 +0000 UTC m=+1.177088286 container attach 65d877a4b08696654ed749953a9a543d9fb7e2bb836380b744d1069f81172461 (image=quay.io/ceph/ceph:v19, name=wonderful_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Sep 30 14:15:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Sep 30 14:15:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e31 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Sep 30 14:15:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:01 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:15:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Sep 30 14:15:02 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:02 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Sep 30 14:15:02 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Sep 30 14:15:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:15:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Sep 30 14:15:02 compute-0 ceph-mon[74194]: 3.13 scrub starts
Sep 30 14:15:02 compute-0 ceph-mon[74194]: 3.13 scrub ok
Sep 30 14:15:02 compute-0 ceph-mon[74194]: 4.7 deep-scrub starts
Sep 30 14:15:02 compute-0 ceph-mon[74194]: 4.7 deep-scrub ok
Sep 30 14:15:02 compute-0 ceph-mon[74194]: 3.15 scrub starts
Sep 30 14:15:02 compute-0 ceph-mon[74194]: 3.15 scrub ok
Sep 30 14:15:02 compute-0 ceph-mon[74194]: pgmap v97: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:02 compute-0 ceph-mon[74194]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Sep 30 14:15:02 compute-0 ceph-mon[74194]: from='osd.2 [v2:192.168.122.102:6800/1427596359,v1:192.168.122.102:6801/1427596359]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Sep 30 14:15:02 compute-0 ceph-mon[74194]: osdmap e31: 3 total, 2 up, 3 in
Sep 30 14:15:02 compute-0 ceph-mon[74194]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Sep 30 14:15:02 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:02 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/935967380' entity='client.admin' 
Sep 30 14:15:02 compute-0 systemd[1]: libpod-65d877a4b08696654ed749953a9a543d9fb7e2bb836380b744d1069f81172461.scope: Deactivated successfully.
Sep 30 14:15:02 compute-0 podman[87309]: 2025-09-30 14:15:02.822517709 +0000 UTC m=+2.240205347 container died 65d877a4b08696654ed749953a9a543d9fb7e2bb836380b744d1069f81172461 (image=quay.io/ceph/ceph:v19, name=wonderful_torvalds, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 14:15:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v99: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:15:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:15:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:15:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:15:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:15:03 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:15:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Sep 30 14:15:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Sep 30 14:15:03 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1427596359; not ready for session (expect reconnect)
Sep 30 14:15:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:03 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Sep 30 14:15:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:03 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:03 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:15:03 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Sep 30 14:15:03 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Sep 30 14:15:03 compute-0 sudo[87360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:15:03 compute-0 sudo[87360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:03 compute-0 sudo[87360]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:03 compute-0 sudo[87386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:03 compute-0 sudo[87386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:03 compute-0 sudo[87386]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:03 compute-0 sudo[87411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:15:03 compute-0 sudo[87411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-927a233c3b744271eaea7b1435c93ef6732dff569f32bbf1e655cc29094ded78-merged.mount: Deactivated successfully.
Sep 30 14:15:04 compute-0 sudo[87411]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:04 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1427596359; not ready for session (expect reconnect)
Sep 30 14:15:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:04 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:04 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:15:04 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Sep 30 14:15:04 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Sep 30 14:15:04 compute-0 ceph-mon[74194]: purged_snaps scrub starts
Sep 30 14:15:04 compute-0 ceph-mon[74194]: purged_snaps scrub ok
Sep 30 14:15:04 compute-0 ceph-mon[74194]: 3.6 scrub starts
Sep 30 14:15:04 compute-0 ceph-mon[74194]: 3.6 scrub ok
Sep 30 14:15:04 compute-0 ceph-mon[74194]: 4.1f scrub starts
Sep 30 14:15:04 compute-0 ceph-mon[74194]: 4.1f scrub ok
Sep 30 14:15:04 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:04 compute-0 ceph-mon[74194]: 4.6 scrub starts
Sep 30 14:15:04 compute-0 ceph-mon[74194]: 4.6 scrub ok
Sep 30 14:15:04 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/935967380' entity='client.admin' 
Sep 30 14:15:04 compute-0 ceph-mon[74194]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Sep 30 14:15:04 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:04 compute-0 ceph-mon[74194]: osdmap e32: 3 total, 2 up, 3 in
Sep 30 14:15:04 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v101: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:05 compute-0 podman[87309]: 2025-09-30 14:15:05.062921559 +0000 UTC m=+4.480609197 container remove 65d877a4b08696654ed749953a9a543d9fb7e2bb836380b744d1069f81172461 (image=quay.io/ceph/ceph:v19, name=wonderful_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:15:05 compute-0 sudo[87306]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:05 compute-0 systemd[1]: libpod-conmon-65d877a4b08696654ed749953a9a543d9fb7e2bb836380b744d1069f81172461.scope: Deactivated successfully.
Sep 30 14:15:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:15:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:15:05 compute-0 sudo[87489]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujspacaxsaclqcqkvytjscclfrkipxmd ; /usr/bin/python3'
Sep 30 14:15:05 compute-0 sudo[87489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:05 compute-0 python3[87491]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:05 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1427596359; not ready for session (expect reconnect)
Sep 30 14:15:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:05 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:05 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:15:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:15:05 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.4 deep-scrub starts
Sep 30 14:15:05 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
Sep 30 14:15:05 compute-0 podman[87492]: 2025-09-30 14:15:05.524953652 +0000 UTC m=+0.094517981 container create 09df9329b6c79fe6aa7281708a8b02da46c05ac6d3120b07f8a40c879455108c (image=quay.io/ceph/ceph:v19, name=blissful_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:15:05 compute-0 podman[87492]: 2025-09-30 14:15:05.455132193 +0000 UTC m=+0.024696542 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:15:05 compute-0 systemd[1]: Started libpod-conmon-09df9329b6c79fe6aa7281708a8b02da46c05ac6d3120b07f8a40c879455108c.scope.
Sep 30 14:15:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a37eacc090152da8cb87949093154e49bae41fdd0a10ad0e7d6b4700fb0873d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a37eacc090152da8cb87949093154e49bae41fdd0a10ad0e7d6b4700fb0873d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a37eacc090152da8cb87949093154e49bae41fdd0a10ad0e7d6b4700fb0873d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:06 compute-0 podman[87492]: 2025-09-30 14:15:06.00040345 +0000 UTC m=+0.569967789 container init 09df9329b6c79fe6aa7281708a8b02da46c05ac6d3120b07f8a40c879455108c (image=quay.io/ceph/ceph:v19, name=blissful_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:06 compute-0 podman[87492]: 2025-09-30 14:15:06.007522457 +0000 UTC m=+0.577086786 container start 09df9329b6c79fe6aa7281708a8b02da46c05ac6d3120b07f8a40c879455108c (image=quay.io/ceph/ceph:v19, name=blissful_spence, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 14:15:06 compute-0 ceph-mon[74194]: 3.1a scrub starts
Sep 30 14:15:06 compute-0 ceph-mon[74194]: 3.1a scrub ok
Sep 30 14:15:06 compute-0 ceph-mon[74194]: pgmap v99: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:06 compute-0 ceph-mon[74194]: 3.2 scrub starts
Sep 30 14:15:06 compute-0 ceph-mon[74194]: 3.2 scrub ok
Sep 30 14:15:06 compute-0 ceph-mon[74194]: 4.13 scrub starts
Sep 30 14:15:06 compute-0 ceph-mon[74194]: 4.13 scrub ok
Sep 30 14:15:06 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:06 compute-0 ceph-mon[74194]: 4.4 scrub starts
Sep 30 14:15:06 compute-0 ceph-mon[74194]: 4.4 scrub ok
Sep 30 14:15:06 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:06 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:06 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:06 compute-0 podman[87492]: 2025-09-30 14:15:06.188758103 +0000 UTC m=+0.758322452 container attach 09df9329b6c79fe6aa7281708a8b02da46c05ac6d3120b07f8a40c879455108c (image=quay.io/ceph/ceph:v19, name=blissful_spence, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:06 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1427596359; not ready for session (expect reconnect)
Sep 30 14:15:06 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts
Sep 30 14:15:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:15:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:06 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:15:06 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.3 deep-scrub ok
Sep 30 14:15:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.1b( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.371060371s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 85.134071350s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.1b( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.371060371s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.134071350s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.589675903s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 87.352882385s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.15( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385762215s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 85.148986816s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.589675903s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.352882385s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.15( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385762215s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.148986816s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.10( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385617256s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 85.148994446s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.10( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385617256s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.148994446s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.13( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385618210s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 85.149024963s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.d( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385817528s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 85.149276733s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.d( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385817528s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149276733s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.c( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385785103s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 85.149276733s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.c( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385785103s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149276733s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.585020065s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 87.348571777s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.a( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385710716s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active pruub 85.149276733s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.585020065s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.348571777s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.a( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385710716s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149276733s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=21/22 n=0 ec=17/17 lis/c=21/21 les/c/f=22/22/0 sis=32 pruub=10.077845573s) [] r=-1 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active pruub 83.841529846s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=21/22 n=0 ec=17/17 lis/c=21/21 les/c/f=22/22/0 sis=32 pruub=10.077845573s) [] r=-1 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841529846s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=32 pruub=10.077803612s) [] r=-1 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active pruub 83.841537476s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=32 pruub=10.077803612s) [] r=-1 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841537476s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.589871407s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 87.353637695s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.590228081s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 87.354087830s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[2.13( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=32 pruub=11.385618210s) [] r=-1 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149024963s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.590228081s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.354087830s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.589963913s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 87.353927612s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.589963913s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.353927612s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.589871407s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.353637695s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=32 pruub=10.077262878s) [] r=-1 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active pruub 83.841300964s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=32 pruub=10.077262878s) [] r=-1 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841300964s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.590033531s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 87.354103088s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.590033531s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.354103088s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=32 pruub=10.077339172s) [] r=-1 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active pruub 83.841461182s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.590037346s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 87.354179382s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=32 pruub=10.077339172s) [] r=-1 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841461182s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 32 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.590037346s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.354179382s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v102: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:07 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1427596359; not ready for session (expect reconnect)
Sep 30 14:15:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:07 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:07 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:15:07 compute-0 ceph-mon[74194]: 3.10 scrub starts
Sep 30 14:15:07 compute-0 ceph-mon[74194]: 3.10 scrub ok
Sep 30 14:15:07 compute-0 ceph-mon[74194]: pgmap v101: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:07 compute-0 ceph-mon[74194]: 3.4 deep-scrub starts
Sep 30 14:15:07 compute-0 ceph-mon[74194]: 3.4 deep-scrub ok
Sep 30 14:15:07 compute-0 ceph-mon[74194]: 3.14 scrub starts
Sep 30 14:15:07 compute-0 ceph-mon[74194]: 3.14 scrub ok
Sep 30 14:15:07 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:07 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:07 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.f scrub starts
Sep 30 14:15:07 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 4.f scrub ok
Sep 30 14:15:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/608654593' entity='client.admin' 
Sep 30 14:15:07 compute-0 systemd[1]: libpod-09df9329b6c79fe6aa7281708a8b02da46c05ac6d3120b07f8a40c879455108c.scope: Deactivated successfully.
Sep 30 14:15:07 compute-0 podman[87492]: 2025-09-30 14:15:07.494541638 +0000 UTC m=+2.064105977 container died 09df9329b6c79fe6aa7281708a8b02da46c05ac6d3120b07f8a40c879455108c (image=quay.io/ceph/ceph:v19, name=blissful_spence, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:07 compute-0 sshd-session[87531]: Received disconnect from 209.38.228.14 port 52086:11: Bye Bye [preauth]
Sep 30 14:15:07 compute-0 sshd-session[87531]: Disconnected from authenticating user root 209.38.228.14 port 52086 [preauth]
Sep 30 14:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a37eacc090152da8cb87949093154e49bae41fdd0a10ad0e7d6b4700fb0873d-merged.mount: Deactivated successfully.
Sep 30 14:15:07 compute-0 podman[87492]: 2025-09-30 14:15:07.713460596 +0000 UTC m=+2.283024925 container remove 09df9329b6c79fe6aa7281708a8b02da46c05ac6d3120b07f8a40c879455108c (image=quay.io/ceph/ceph:v19, name=blissful_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:07 compute-0 sudo[87489]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:07 compute-0 systemd[1]: libpod-conmon-09df9329b6c79fe6aa7281708a8b02da46c05ac6d3120b07f8a40c879455108c.scope: Deactivated successfully.
Sep 30 14:15:07 compute-0 sudo[87568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omswrlssdkbyfkxjrcczytcndsfytser ; /usr/bin/python3'
Sep 30 14:15:07 compute-0 sudo[87568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:08 compute-0 python3[87570]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:08 compute-0 podman[87571]: 2025-09-30 14:15:08.104132769 +0000 UTC m=+0.061522892 container create 71e2a8e2d5b057bae1e98760fabec57e8e353fc9a86cf54a891988df196128d1 (image=quay.io/ceph/ceph:v19, name=boring_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:15:08 compute-0 systemd[1]: Started libpod-conmon-71e2a8e2d5b057bae1e98760fabec57e8e353fc9a86cf54a891988df196128d1.scope.
Sep 30 14:15:08 compute-0 podman[87571]: 2025-09-30 14:15:08.06356329 +0000 UTC m=+0.020953423 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce6657761cdc1fc4def3ffe603de95ca162a3463f41641e97c64893401c7e26/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce6657761cdc1fc4def3ffe603de95ca162a3463f41641e97c64893401c7e26/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce6657761cdc1fc4def3ffe603de95ca162a3463f41641e97c64893401c7e26/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:08 compute-0 podman[87571]: 2025-09-30 14:15:08.207909044 +0000 UTC m=+0.165299167 container init 71e2a8e2d5b057bae1e98760fabec57e8e353fc9a86cf54a891988df196128d1 (image=quay.io/ceph/ceph:v19, name=boring_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Sep 30 14:15:08 compute-0 podman[87571]: 2025-09-30 14:15:08.214454936 +0000 UTC m=+0.171845049 container start 71e2a8e2d5b057bae1e98760fabec57e8e353fc9a86cf54a891988df196128d1 (image=quay.io/ceph/ceph:v19, name=boring_curie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:15:08 compute-0 podman[87571]: 2025-09-30 14:15:08.283525106 +0000 UTC m=+0.240915239 container attach 71e2a8e2d5b057bae1e98760fabec57e8e353fc9a86cf54a891988df196128d1 (image=quay.io/ceph/ceph:v19, name=boring_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:15:08 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1427596359; not ready for session (expect reconnect)
Sep 30 14:15:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:08 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:08 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:15:08 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Sep 30 14:15:08 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Sep 30 14:15:08 compute-0 ceph-mon[74194]: 4.3 deep-scrub starts
Sep 30 14:15:08 compute-0 ceph-mon[74194]: 4.3 deep-scrub ok
Sep 30 14:15:08 compute-0 ceph-mon[74194]: 3.16 scrub starts
Sep 30 14:15:08 compute-0 ceph-mon[74194]: 3.16 scrub ok
Sep 30 14:15:08 compute-0 ceph-mon[74194]: pgmap v102: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:08 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:08 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:08 compute-0 ceph-mon[74194]: 4.f scrub starts
Sep 30 14:15:08 compute-0 ceph-mon[74194]: 4.f scrub ok
Sep 30 14:15:08 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/608654593' entity='client.admin' 
Sep 30 14:15:08 compute-0 ceph-mon[74194]: 3.f scrub starts
Sep 30 14:15:08 compute-0 ceph-mon[74194]: 3.f scrub ok
Sep 30 14:15:08 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Sep 30 14:15:08 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/100178633' entity='client.admin' 
Sep 30 14:15:08 compute-0 systemd[1]: libpod-71e2a8e2d5b057bae1e98760fabec57e8e353fc9a86cf54a891988df196128d1.scope: Deactivated successfully.
Sep 30 14:15:08 compute-0 podman[87571]: 2025-09-30 14:15:08.660369135 +0000 UTC m=+0.617759248 container died 71e2a8e2d5b057bae1e98760fabec57e8e353fc9a86cf54a891988df196128d1 (image=quay.io/ceph/ceph:v19, name=boring_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cce6657761cdc1fc4def3ffe603de95ca162a3463f41641e97c64893401c7e26-merged.mount: Deactivated successfully.
Sep 30 14:15:08 compute-0 podman[87571]: 2025-09-30 14:15:08.697881154 +0000 UTC m=+0.655271267 container remove 71e2a8e2d5b057bae1e98760fabec57e8e353fc9a86cf54a891988df196128d1 (image=quay.io/ceph/ceph:v19, name=boring_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:08 compute-0 systemd[1]: libpod-conmon-71e2a8e2d5b057bae1e98760fabec57e8e353fc9a86cf54a891988df196128d1.scope: Deactivated successfully.
Sep 30 14:15:08 compute-0 sudo[87568]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v103: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:09 compute-0 sudo[87645]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiatzigbmwolhaixbgqrdrljbyzusndw ; /usr/bin/python3'
Sep 30 14:15:09 compute-0 sudo[87645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:09 compute-0 python3[87647]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:09 compute-0 sudo[87645]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1427596359; not ready for session (expect reconnect)
Sep 30 14:15:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:09 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Sep 30 14:15:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:15:09 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Sep 30 14:15:09 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Sep 30 14:15:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:15:09 compute-0 sudo[87683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilhewgmtfctkpsdldybrwhzcvnzjciou ; /usr/bin/python3'
Sep 30 14:15:09 compute-0 sudo[87683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Sep 30 14:15:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:09 compute-0 python3[87685]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.buxlkm/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Sep 30 14:15:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.8M
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.8M
Sep 30 14:15:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134068633: error parsing value: Value '134068633' is below minimum 939524096
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134068633: error parsing value: Value '134068633' is below minimum 939524096
Sep 30 14:15:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:15:09 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:15:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:15:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:15:09 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:15:09 compute-0 sudo[87692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 14:15:09 compute-0 podman[87686]: 2025-09-30 14:15:09.789390453 +0000 UTC m=+0.063368261 container create 402463ed75c7b49c0fb5efa604100beb6b81ceabfe227a1013f2be4cdebe1468 (image=quay.io/ceph/ceph:v19, name=eloquent_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:09 compute-0 sudo[87692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:09 compute-0 sudo[87692]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Sep 30 14:15:09 compute-0 podman[87686]: 2025-09-30 14:15:09.751002981 +0000 UTC m=+0.024980809 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:09 compute-0 sudo[87723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph
Sep 30 14:15:09 compute-0 sudo[87723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:09 compute-0 sudo[87723]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:09 compute-0 ceph-mon[74194]: 3.1 scrub starts
Sep 30 14:15:09 compute-0 ceph-mon[74194]: 3.1 scrub ok
Sep 30 14:15:09 compute-0 ceph-mon[74194]: OSD bench result of 5642.751702 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Sep 30 14:15:09 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/100178633' entity='client.admin' 
Sep 30 14:15:09 compute-0 ceph-mon[74194]: 3.3 scrub starts
Sep 30 14:15:09 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:09 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:09 compute-0 systemd[1]: Started libpod-conmon-402463ed75c7b49c0fb5efa604100beb6b81ceabfe227a1013f2be4cdebe1468.scope.
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.1b( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.377472878s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.134071350s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.591959000s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.348571777s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.591932297s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.348571777s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.1b( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.377428055s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.134071350s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.15( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392279625s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.148986816s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.15( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392263412s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.148986816s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.13( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392208099s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149024963s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596120834s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.352882385s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.13( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392193794s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149024963s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596031189s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.352882385s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.10( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392086029s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.148994446s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.c( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392358780s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149276733s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.10( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392068863s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.148994446s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1427596359,v1:192.168.122.102:6801/1427596359] boot
Sep 30 14:15:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.c( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392347336s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149276733s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.d( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392339706s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149276733s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.a( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392327309s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149276733s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.a( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392316818s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149276733s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:09 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[2.d( empty local-lis/les=27/28 n=0 ec=21/16 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.392325401s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.149276733s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[3.0( empty local-lis/les=21/22 n=0 ec=17/17 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=7.084457397s) [2] r=-1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841529846s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[3.0( empty local-lis/les=21/22 n=0 ec=17/17 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=7.084445477s) [2] r=-1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841529846s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=7.084341526s) [2] r=-1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841537476s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=7.084327698s) [2] r=-1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841537476s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596416473s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.353637695s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596848488s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.354087830s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596400261s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.353637695s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596837997s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.354087830s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596655846s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.353927612s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596643448s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.353927612s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=7.083981991s) [2] r=-1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841300964s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=7.084115505s) [2] r=-1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841461182s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596759796s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.354103088s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=7.083967209s) [2] r=-1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841300964s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/17 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=7.084106445s) [2] r=-1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.841461182s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596820831s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.354179382s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596746445s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.354103088s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 33 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=10.596801758s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.354179382s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:15:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:09 compute-0 sudo[87748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43965561ccc2674e12670b52e81b7f98c86309d28d3f9a06dec62fe419eebf26/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43965561ccc2674e12670b52e81b7f98c86309d28d3f9a06dec62fe419eebf26/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43965561ccc2674e12670b52e81b7f98c86309d28d3f9a06dec62fe419eebf26/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:09 compute-0 sudo[87748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:09 compute-0 sudo[87748]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:09 compute-0 podman[87686]: 2025-09-30 14:15:09.952811588 +0000 UTC m=+0.226789406 container init 402463ed75c7b49c0fb5efa604100beb6b81ceabfe227a1013f2be4cdebe1468 (image=quay.io/ceph/ceph:v19, name=eloquent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:15:09 compute-0 podman[87686]: 2025-09-30 14:15:09.960134211 +0000 UTC m=+0.234112019 container start 402463ed75c7b49c0fb5efa604100beb6b81ceabfe227a1013f2be4cdebe1468 (image=quay.io/ceph/ceph:v19, name=eloquent_mirzakhani, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:15:09 compute-0 podman[87686]: 2025-09-30 14:15:09.964777944 +0000 UTC m=+0.238755752 container attach 402463ed75c7b49c0fb5efa604100beb6b81ceabfe227a1013f2be4cdebe1468 (image=quay.io/ceph/ceph:v19, name=eloquent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:09 compute-0 sudo[87778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:09 compute-0 sudo[87778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[87778]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 sudo[87804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:10 compute-0 sudo[87804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[87804]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:15:10 compute-0 sudo[87871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:10 compute-0 sudo[87871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[87871]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 sudo[87896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:10 compute-0 sudo[87896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[87896]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:10 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:10 compute-0 sudo[87921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Sep 30 14:15:10 compute-0 sudo[87921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[87921]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:10 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.buxlkm/server_addr}] v 0)
Sep 30 14:15:10 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:10 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:10 compute-0 sudo[87946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:15:10 compute-0 sudo[87946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[87946]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1585349490' entity='client.admin' 
Sep 30 14:15:10 compute-0 systemd[1]: libpod-402463ed75c7b49c0fb5efa604100beb6b81ceabfe227a1013f2be4cdebe1468.scope: Deactivated successfully.
Sep 30 14:15:10 compute-0 podman[87686]: 2025-09-30 14:15:10.39083616 +0000 UTC m=+0.664813968 container died 402463ed75c7b49c0fb5efa604100beb6b81ceabfe227a1013f2be4cdebe1468 (image=quay.io/ceph/ceph:v19, name=eloquent_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:15:10 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.e scrub starts
Sep 30 14:15:10 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.e scrub ok
Sep 30 14:15:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-43965561ccc2674e12670b52e81b7f98c86309d28d3f9a06dec62fe419eebf26-merged.mount: Deactivated successfully.
Sep 30 14:15:10 compute-0 sudo[87972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:15:10 compute-0 sudo[87972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[87972]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 podman[87686]: 2025-09-30 14:15:10.437458968 +0000 UTC m=+0.711436776 container remove 402463ed75c7b49c0fb5efa604100beb6b81ceabfe227a1013f2be4cdebe1468 (image=quay.io/ceph/ceph:v19, name=eloquent_mirzakhani, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:15:10 compute-0 systemd[1]: libpod-conmon-402463ed75c7b49c0fb5efa604100beb6b81ceabfe227a1013f2be4cdebe1468.scope: Deactivated successfully.
Sep 30 14:15:10 compute-0 sudo[87683]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 sudo[88010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:10 compute-0 sudo[88010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[88010]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 sudo[88035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:10 compute-0 sudo[88035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[88035]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 sudo[88060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:10 compute-0 sudo[88060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[88060]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 sudo[88108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:10 compute-0 sudo[88108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[88108]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 sudo[88133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:10 compute-0 sudo[88133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[88133]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 sudo[88158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:10 compute-0 sudo[88158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:10 compute-0 sudo[88158]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Sep 30 14:15:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:15:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:15:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:15:10 compute-0 ceph-mon[74194]: 3.3 scrub ok
Sep 30 14:15:10 compute-0 ceph-mon[74194]: pgmap v103: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Sep 30 14:15:10 compute-0 ceph-mon[74194]: 2.19 deep-scrub starts
Sep 30 14:15:10 compute-0 ceph-mon[74194]: 2.19 deep-scrub ok
Sep 30 14:15:10 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:10 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:10 compute-0 ceph-mon[74194]: Adjusting osd_memory_target on compute-2 to 127.8M
Sep 30 14:15:10 compute-0 ceph-mon[74194]: Unable to set osd_memory_target on compute-2 to 134068633: error parsing value: Value '134068633' is below minimum 939524096
Sep 30 14:15:10 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:15:10 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:15:10 compute-0 ceph-mon[74194]: Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:15:10 compute-0 ceph-mon[74194]: Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:15:10 compute-0 ceph-mon[74194]: Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:15:10 compute-0 ceph-mon[74194]: osd.2 [v2:192.168.122.102:6800/1427596359,v1:192.168.122.102:6801/1427596359] boot
Sep 30 14:15:10 compute-0 ceph-mon[74194]: osdmap e33: 3 total, 3 up, 3 in
Sep 30 14:15:10 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:10 compute-0 ceph-mon[74194]: Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:10 compute-0 ceph-mon[74194]: Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:10 compute-0 ceph-mon[74194]: Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:10 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1585349490' entity='client.admin' 
Sep 30 14:15:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v105: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:11 compute-0 sudo[88206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfetvkzufnforwkryhztlqqcjmwdyvxf ; /usr/bin/python3'
Sep 30 14:15:11 compute-0 sudo[88206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Sep 30 14:15:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:11 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Sep 30 14:15:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:15:11 compute-0 python3[88208]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.zeqptq/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:15:11 compute-0 podman[88209]: 2025-09-30 14:15:11.292303332 +0000 UTC m=+0.054172308 container create d5b8081fe02e4f255c14e0198613a0316755993d015735f574bbfa4c0ca44374 (image=quay.io/ceph/ceph:v19, name=great_engelbart, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 14:15:11 compute-0 systemd[1]: Started libpod-conmon-d5b8081fe02e4f255c14e0198613a0316755993d015735f574bbfa4c0ca44374.scope.
Sep 30 14:15:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:15:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ceccb72e71271e44044a8012b8d7969f66e7f37df39d0b3efe27f6c325eb48/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ceccb72e71271e44044a8012b8d7969f66e7f37df39d0b3efe27f6c325eb48/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ceccb72e71271e44044a8012b8d7969f66e7f37df39d0b3efe27f6c325eb48/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:11 compute-0 podman[88209]: 2025-09-30 14:15:11.26565999 +0000 UTC m=+0.027529026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:11 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Sep 30 14:15:11 compute-0 podman[88209]: 2025-09-30 14:15:11.377712702 +0000 UTC m=+0.139581658 container init d5b8081fe02e4f255c14e0198613a0316755993d015735f574bbfa4c0ca44374 (image=quay.io/ceph/ceph:v19, name=great_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 14:15:11 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Sep 30 14:15:11 compute-0 podman[88209]: 2025-09-30 14:15:11.388968189 +0000 UTC m=+0.150837135 container start d5b8081fe02e4f255c14e0198613a0316755993d015735f574bbfa4c0ca44374 (image=quay.io/ceph/ceph:v19, name=great_engelbart, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:11 compute-0 podman[88209]: 2025-09-30 14:15:11.392653136 +0000 UTC m=+0.154522142 container attach d5b8081fe02e4f255c14e0198613a0316755993d015735f574bbfa4c0ca44374 (image=quay.io/ceph/ceph:v19, name=great_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.zeqptq/server_addr}] v 0)
Sep 30 14:15:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:15:12 compute-0 ceph-mon[74194]: 4.e scrub starts
Sep 30 14:15:12 compute-0 ceph-mon[74194]: 4.e scrub ok
Sep 30 14:15:12 compute-0 ceph-mon[74194]: 2.e scrub starts
Sep 30 14:15:12 compute-0 ceph-mon[74194]: 2.e scrub ok
Sep 30 14:15:12 compute-0 ceph-mon[74194]: 4.c scrub starts
Sep 30 14:15:12 compute-0 ceph-mon[74194]: 4.c scrub ok
Sep 30 14:15:12 compute-0 ceph-mon[74194]: pgmap v105: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:12 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:12 compute-0 ceph-mon[74194]: osdmap e34: 3 total, 3 up, 3 in
Sep 30 14:15:12 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:12 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:12 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:12 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3676501336' entity='client.admin' 
Sep 30 14:15:12 compute-0 systemd[1]: libpod-d5b8081fe02e4f255c14e0198613a0316755993d015735f574bbfa4c0ca44374.scope: Deactivated successfully.
Sep 30 14:15:12 compute-0 podman[88209]: 2025-09-30 14:15:12.218459785 +0000 UTC m=+0.980328731 container died d5b8081fe02e4f255c14e0198613a0316755993d015735f574bbfa4c0ca44374 (image=quay.io/ceph/ceph:v19, name=great_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9ceccb72e71271e44044a8012b8d7969f66e7f37df39d0b3efe27f6c325eb48-merged.mount: Deactivated successfully.
Sep 30 14:15:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:15:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:15:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:15:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:15:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:15:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:15:12 compute-0 podman[88209]: 2025-09-30 14:15:12.276447542 +0000 UTC m=+1.038316488 container remove d5b8081fe02e4f255c14e0198613a0316755993d015735f574bbfa4c0ca44374 (image=quay.io/ceph/ceph:v19, name=great_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:15:12 compute-0 systemd[1]: libpod-conmon-d5b8081fe02e4f255c14e0198613a0316755993d015735f574bbfa4c0ca44374.scope: Deactivated successfully.
Sep 30 14:15:12 compute-0 sudo[88206]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:12 compute-0 sudo[88261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:12 compute-0 sudo[88261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:12 compute-0 sudo[88261]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:12 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Sep 30 14:15:12 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Sep 30 14:15:12 compute-0 sudo[88286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:15:12 compute-0 sudo[88286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:12 compute-0 podman[88348]: 2025-09-30 14:15:12.820371253 +0000 UTC m=+0.103132518 container create 3b1f08c433cafa52349b2345db0749a8d8ecff344d899ec499fbeb605cc88097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:15:12 compute-0 podman[88348]: 2025-09-30 14:15:12.745000187 +0000 UTC m=+0.027761472 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:15:12 compute-0 systemd[1]: Started libpod-conmon-3b1f08c433cafa52349b2345db0749a8d8ecff344d899ec499fbeb605cc88097.scope.
Sep 30 14:15:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:12 compute-0 sudo[88389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jafksvubxdlgwvsbxqqrmynfcyjafcgn ; /usr/bin/python3'
Sep 30 14:15:12 compute-0 sudo[88389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:12 compute-0 podman[88348]: 2025-09-30 14:15:12.956930011 +0000 UTC m=+0.239691306 container init 3b1f08c433cafa52349b2345db0749a8d8ecff344d899ec499fbeb605cc88097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swirles, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:12 compute-0 podman[88348]: 2025-09-30 14:15:12.963980117 +0000 UTC m=+0.246741382 container start 3b1f08c433cafa52349b2345db0749a8d8ecff344d899ec499fbeb605cc88097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swirles, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:12 compute-0 podman[88348]: 2025-09-30 14:15:12.967208692 +0000 UTC m=+0.249969957 container attach 3b1f08c433cafa52349b2345db0749a8d8ecff344d899ec499fbeb605cc88097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swirles, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:12 compute-0 sad_swirles[88387]: 167 167
Sep 30 14:15:12 compute-0 podman[88348]: 2025-09-30 14:15:12.970375975 +0000 UTC m=+0.253137240 container died 3b1f08c433cafa52349b2345db0749a8d8ecff344d899ec499fbeb605cc88097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swirles, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:12 compute-0 systemd[1]: libpod-3b1f08c433cafa52349b2345db0749a8d8ecff344d899ec499fbeb605cc88097.scope: Deactivated successfully.
Sep 30 14:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-07226c8af54b0db5fb06a484be17297d85bb1e46afb98e8f7d6243b08d2007d5-merged.mount: Deactivated successfully.
Sep 30 14:15:13 compute-0 podman[88348]: 2025-09-30 14:15:13.007188735 +0000 UTC m=+0.289950000 container remove 3b1f08c433cafa52349b2345db0749a8d8ecff344d899ec499fbeb605cc88097 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:13 compute-0 systemd[1]: libpod-conmon-3b1f08c433cafa52349b2345db0749a8d8ecff344d899ec499fbeb605cc88097.scope: Deactivated successfully.
Sep 30 14:15:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v107: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:13 compute-0 python3[88392]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.udzudc/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:13 compute-0 podman[88409]: 2025-09-30 14:15:13.139317997 +0000 UTC m=+0.036038881 container create d024df425ecfb1fbea2b52d66a3e1b769d0b90e1ca568d2143e9deed83e0997c (image=quay.io/ceph/ceph:v19, name=blissful_fermi, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:13 compute-0 podman[88419]: 2025-09-30 14:15:13.154001034 +0000 UTC m=+0.037095909 container create 02a8183896b65d4e60462706327db8f2c20f3cce7b5d95c238e9410644c8f093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 14:15:13 compute-0 systemd[1]: Started libpod-conmon-d024df425ecfb1fbea2b52d66a3e1b769d0b90e1ca568d2143e9deed83e0997c.scope.
Sep 30 14:15:13 compute-0 systemd[1]: Started libpod-conmon-02a8183896b65d4e60462706327db8f2c20f3cce7b5d95c238e9410644c8f093.scope.
Sep 30 14:15:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d33403e460480273437c41208b910307fac2daa72bdd27d60aa2d46fc7badb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d33403e460480273437c41208b910307fac2daa72bdd27d60aa2d46fc7badb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d33403e460480273437c41208b910307fac2daa72bdd27d60aa2d46fc7badb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f628bf739b76cd73138281344dcbb6beb400c29f0ac15509e0e1140d86e20a0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f628bf739b76cd73138281344dcbb6beb400c29f0ac15509e0e1140d86e20a0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f628bf739b76cd73138281344dcbb6beb400c29f0ac15509e0e1140d86e20a0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f628bf739b76cd73138281344dcbb6beb400c29f0ac15509e0e1140d86e20a0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f628bf739b76cd73138281344dcbb6beb400c29f0ac15509e0e1140d86e20a0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:13 compute-0 ceph-mon[74194]: 2.1 scrub starts
Sep 30 14:15:13 compute-0 ceph-mon[74194]: 2.1 scrub ok
Sep 30 14:15:13 compute-0 ceph-mon[74194]: 2.a scrub starts
Sep 30 14:15:13 compute-0 ceph-mon[74194]: 4.d scrub starts
Sep 30 14:15:13 compute-0 ceph-mon[74194]: 2.a scrub ok
Sep 30 14:15:13 compute-0 ceph-mon[74194]: 4.d scrub ok
Sep 30 14:15:13 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:13 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3676501336' entity='client.admin' 
Sep 30 14:15:13 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:13 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:15:13 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:15:13 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:15:13 compute-0 ceph-mon[74194]: 2.10 scrub starts
Sep 30 14:15:13 compute-0 ceph-mon[74194]: 2.10 scrub ok
Sep 30 14:15:13 compute-0 podman[88409]: 2025-09-30 14:15:13.123982903 +0000 UTC m=+0.020703807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:13 compute-0 podman[88409]: 2025-09-30 14:15:13.221783359 +0000 UTC m=+0.118504273 container init d024df425ecfb1fbea2b52d66a3e1b769d0b90e1ca568d2143e9deed83e0997c (image=quay.io/ceph/ceph:v19, name=blissful_fermi, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:15:13 compute-0 podman[88419]: 2025-09-30 14:15:13.231129436 +0000 UTC m=+0.114224331 container init 02a8183896b65d4e60462706327db8f2c20f3cce7b5d95c238e9410644c8f093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:13 compute-0 podman[88419]: 2025-09-30 14:15:13.137266253 +0000 UTC m=+0.020361158 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:15:13 compute-0 podman[88409]: 2025-09-30 14:15:13.233633772 +0000 UTC m=+0.130354656 container start d024df425ecfb1fbea2b52d66a3e1b769d0b90e1ca568d2143e9deed83e0997c (image=quay.io/ceph/ceph:v19, name=blissful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:13 compute-0 podman[88409]: 2025-09-30 14:15:13.236879657 +0000 UTC m=+0.133600561 container attach d024df425ecfb1fbea2b52d66a3e1b769d0b90e1ca568d2143e9deed83e0997c (image=quay.io/ceph/ceph:v19, name=blissful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:15:13 compute-0 podman[88419]: 2025-09-30 14:15:13.238338376 +0000 UTC m=+0.121433251 container start 02a8183896b65d4e60462706327db8f2c20f3cce7b5d95c238e9410644c8f093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:13 compute-0 podman[88419]: 2025-09-30 14:15:13.241351815 +0000 UTC m=+0.124446690 container attach 02a8183896b65d4e60462706327db8f2c20f3cce7b5d95c238e9410644c8f093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:15:13 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Sep 30 14:15:13 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Sep 30 14:15:13 compute-0 youthful_noether[88445]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:15:13 compute-0 youthful_noether[88445]: --> All data devices are unavailable
Sep 30 14:15:13 compute-0 systemd[1]: libpod-02a8183896b65d4e60462706327db8f2c20f3cce7b5d95c238e9410644c8f093.scope: Deactivated successfully.
Sep 30 14:15:13 compute-0 podman[88419]: 2025-09-30 14:15:13.590202167 +0000 UTC m=+0.473297062 container died 02a8183896b65d4e60462706327db8f2c20f3cce7b5d95c238e9410644c8f093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_noether, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f628bf739b76cd73138281344dcbb6beb400c29f0ac15509e0e1140d86e20a0c-merged.mount: Deactivated successfully.
Sep 30 14:15:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.udzudc/server_addr}] v 0)
Sep 30 14:15:13 compute-0 podman[88419]: 2025-09-30 14:15:13.641145939 +0000 UTC m=+0.524240814 container remove 02a8183896b65d4e60462706327db8f2c20f3cce7b5d95c238e9410644c8f093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:15:13 compute-0 systemd[1]: libpod-conmon-02a8183896b65d4e60462706327db8f2c20f3cce7b5d95c238e9410644c8f093.scope: Deactivated successfully.
Sep 30 14:15:13 compute-0 sudo[88286]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1576935233' entity='client.admin' 
Sep 30 14:15:13 compute-0 systemd[1]: libpod-d024df425ecfb1fbea2b52d66a3e1b769d0b90e1ca568d2143e9deed83e0997c.scope: Deactivated successfully.
Sep 30 14:15:13 compute-0 podman[88409]: 2025-09-30 14:15:13.722028 +0000 UTC m=+0.618748884 container died d024df425ecfb1fbea2b52d66a3e1b769d0b90e1ca568d2143e9deed83e0997c (image=quay.io/ceph/ceph:v19, name=blissful_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:15:13 compute-0 sudo[88493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:13 compute-0 sudo[88493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:13 compute-0 sudo[88493]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:13 compute-0 podman[88409]: 2025-09-30 14:15:13.78162079 +0000 UTC m=+0.678341674 container remove d024df425ecfb1fbea2b52d66a3e1b769d0b90e1ca568d2143e9deed83e0997c (image=quay.io/ceph/ceph:v19, name=blissful_fermi, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:15:13 compute-0 systemd[1]: libpod-conmon-d024df425ecfb1fbea2b52d66a3e1b769d0b90e1ca568d2143e9deed83e0997c.scope: Deactivated successfully.
Sep 30 14:15:13 compute-0 sudo[88389]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4d33403e460480273437c41208b910307fac2daa72bdd27d60aa2d46fc7badb-merged.mount: Deactivated successfully.
Sep 30 14:15:13 compute-0 sudo[88531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:15:13 compute-0 sudo[88531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:13 compute-0 sudo[88579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahppxkdwhikedpjtcekocljuiqmploxn ; /usr/bin/python3'
Sep 30 14:15:13 compute-0 sudo[88579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:14 compute-0 python3[88581]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:14 compute-0 podman[88617]: 2025-09-30 14:15:14.183843388 +0000 UTC m=+0.045229903 container create dd254975eca05b49d5e96edcbf69cad2d775389702d9124b4d4ea5ad082d1423 (image=quay.io/ceph/ceph:v19, name=gifted_ptolemy, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:14 compute-0 podman[88630]: 2025-09-30 14:15:14.213566051 +0000 UTC m=+0.045266643 container create a31b7f70fd6609f20592835c751e3f4de126e90e044877994fe2f3f386a841e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:15:14 compute-0 systemd[1]: Started libpod-conmon-dd254975eca05b49d5e96edcbf69cad2d775389702d9124b4d4ea5ad082d1423.scope.
Sep 30 14:15:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6581057587c336f551c574553f7e780104c4329a62ca7ebe0095c920a4bdd9f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6581057587c336f551c574553f7e780104c4329a62ca7ebe0095c920a4bdd9f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6581057587c336f551c574553f7e780104c4329a62ca7ebe0095c920a4bdd9f5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:14 compute-0 systemd[1]: Started libpod-conmon-a31b7f70fd6609f20592835c751e3f4de126e90e044877994fe2f3f386a841e8.scope.
Sep 30 14:15:14 compute-0 podman[88617]: 2025-09-30 14:15:14.161973972 +0000 UTC m=+0.023360507 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:14 compute-0 podman[88617]: 2025-09-30 14:15:14.276336815 +0000 UTC m=+0.137723350 container init dd254975eca05b49d5e96edcbf69cad2d775389702d9124b4d4ea5ad082d1423 (image=quay.io/ceph/ceph:v19, name=gifted_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:14 compute-0 podman[88617]: 2025-09-30 14:15:14.282111127 +0000 UTC m=+0.143497642 container start dd254975eca05b49d5e96edcbf69cad2d775389702d9124b4d4ea5ad082d1423 (image=quay.io/ceph/ceph:v19, name=gifted_ptolemy, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:15:14 compute-0 ceph-mon[74194]: 2.6 scrub starts
Sep 30 14:15:14 compute-0 ceph-mon[74194]: 2.6 scrub ok
Sep 30 14:15:14 compute-0 ceph-mon[74194]: 3.c scrub starts
Sep 30 14:15:14 compute-0 ceph-mon[74194]: 3.c scrub ok
Sep 30 14:15:14 compute-0 ceph-mon[74194]: pgmap v107: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:14 compute-0 ceph-mon[74194]: 2.9 scrub starts
Sep 30 14:15:14 compute-0 ceph-mon[74194]: 2.9 scrub ok
Sep 30 14:15:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1576935233' entity='client.admin' 
Sep 30 14:15:14 compute-0 ceph-mon[74194]: 2.15 scrub starts
Sep 30 14:15:14 compute-0 ceph-mon[74194]: 2.15 scrub ok
Sep 30 14:15:14 compute-0 podman[88630]: 2025-09-30 14:15:14.290125769 +0000 UTC m=+0.121826371 container init a31b7f70fd6609f20592835c751e3f4de126e90e044877994fe2f3f386a841e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 14:15:14 compute-0 podman[88630]: 2025-09-30 14:15:14.194633962 +0000 UTC m=+0.026334584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:15:14 compute-0 podman[88617]: 2025-09-30 14:15:14.29437168 +0000 UTC m=+0.155758195 container attach dd254975eca05b49d5e96edcbf69cad2d775389702d9124b4d4ea5ad082d1423 (image=quay.io/ceph/ceph:v19, name=gifted_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:15:14 compute-0 podman[88630]: 2025-09-30 14:15:14.295327596 +0000 UTC m=+0.127028198 container start a31b7f70fd6609f20592835c751e3f4de126e90e044877994fe2f3f386a841e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 14:15:14 compute-0 nostalgic_hawking[88654]: 167 167
Sep 30 14:15:14 compute-0 podman[88630]: 2025-09-30 14:15:14.300031069 +0000 UTC m=+0.131731681 container attach a31b7f70fd6609f20592835c751e3f4de126e90e044877994fe2f3f386a841e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:15:14 compute-0 systemd[1]: libpod-a31b7f70fd6609f20592835c751e3f4de126e90e044877994fe2f3f386a841e8.scope: Deactivated successfully.
Sep 30 14:15:14 compute-0 podman[88630]: 2025-09-30 14:15:14.300762629 +0000 UTC m=+0.132463241 container died a31b7f70fd6609f20592835c751e3f4de126e90e044877994fe2f3f386a841e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c922eea56efbb2d2e4921d4ac77d957cf7e289c5291da7e3eecc3e339c78a6a3-merged.mount: Deactivated successfully.
Sep 30 14:15:14 compute-0 podman[88630]: 2025-09-30 14:15:14.337367423 +0000 UTC m=+0.169068015 container remove a31b7f70fd6609f20592835c751e3f4de126e90e044877994fe2f3f386a841e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:15:14 compute-0 systemd[1]: libpod-conmon-a31b7f70fd6609f20592835c751e3f4de126e90e044877994fe2f3f386a841e8.scope: Deactivated successfully.
Sep 30 14:15:14 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Sep 30 14:15:14 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Sep 30 14:15:14 compute-0 podman[88698]: 2025-09-30 14:15:14.470794879 +0000 UTC m=+0.021547849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:15:14 compute-0 podman[88698]: 2025-09-30 14:15:14.567942249 +0000 UTC m=+0.118695209 container create d60f25ac40a6b2ddfaf6f68769192f77f0209ea3e0b8511161e491318e58391f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_shirley, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:14 compute-0 systemd[1]: Started libpod-conmon-d60f25ac40a6b2ddfaf6f68769192f77f0209ea3e0b8511161e491318e58391f.scope.
Sep 30 14:15:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260093c2059d8f00f4ae2886bb14a7fa364a440ba8830c093535b706f87c516a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260093c2059d8f00f4ae2886bb14a7fa364a440ba8830c093535b706f87c516a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260093c2059d8f00f4ae2886bb14a7fa364a440ba8830c093535b706f87c516a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260093c2059d8f00f4ae2886bb14a7fa364a440ba8830c093535b706f87c516a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:14 compute-0 podman[88698]: 2025-09-30 14:15:14.679645772 +0000 UTC m=+0.230398732 container init d60f25ac40a6b2ddfaf6f68769192f77f0209ea3e0b8511161e491318e58391f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 14:15:14 compute-0 podman[88698]: 2025-09-30 14:15:14.688238748 +0000 UTC m=+0.238991708 container start d60f25ac40a6b2ddfaf6f68769192f77f0209ea3e0b8511161e491318e58391f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_shirley, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:15:14 compute-0 podman[88698]: 2025-09-30 14:15:14.691610707 +0000 UTC m=+0.242363687 container attach d60f25ac40a6b2ddfaf6f68769192f77f0209ea3e0b8511161e491318e58391f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_shirley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:15:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Sep 30 14:15:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3611745984' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Sep 30 14:15:14 compute-0 confident_shirley[88715]: {
Sep 30 14:15:14 compute-0 confident_shirley[88715]:     "0": [
Sep 30 14:15:14 compute-0 confident_shirley[88715]:         {
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "devices": [
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "/dev/loop3"
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             ],
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "lv_name": "ceph_lv0",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "lv_size": "21470642176",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "name": "ceph_lv0",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "tags": {
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.cluster_name": "ceph",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.crush_device_class": "",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.encrypted": "0",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.osd_id": "0",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.type": "block",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.vdo": "0",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:                 "ceph.with_tpm": "0"
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             },
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "type": "block",
Sep 30 14:15:14 compute-0 confident_shirley[88715]:             "vg_name": "ceph_vg0"
Sep 30 14:15:14 compute-0 confident_shirley[88715]:         }
Sep 30 14:15:14 compute-0 confident_shirley[88715]:     ]
Sep 30 14:15:14 compute-0 confident_shirley[88715]: }
Sep 30 14:15:14 compute-0 systemd[1]: libpod-d60f25ac40a6b2ddfaf6f68769192f77f0209ea3e0b8511161e491318e58391f.scope: Deactivated successfully.
Sep 30 14:15:14 compute-0 podman[88698]: 2025-09-30 14:15:14.998706999 +0000 UTC m=+0.549459989 container died d60f25ac40a6b2ddfaf6f68769192f77f0209ea3e0b8511161e491318e58391f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_shirley, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:15:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-260093c2059d8f00f4ae2886bb14a7fa364a440ba8830c093535b706f87c516a-merged.mount: Deactivated successfully.
Sep 30 14:15:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v108: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:15 compute-0 podman[88698]: 2025-09-30 14:15:15.04811687 +0000 UTC m=+0.598869830 container remove d60f25ac40a6b2ddfaf6f68769192f77f0209ea3e0b8511161e491318e58391f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_shirley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:15:15 compute-0 systemd[1]: libpod-conmon-d60f25ac40a6b2ddfaf6f68769192f77f0209ea3e0b8511161e491318e58391f.scope: Deactivated successfully.
Sep 30 14:15:15 compute-0 sudo[88531]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:15:15 compute-0 sudo[88737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:15 compute-0 sudo[88737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:15 compute-0 sudo[88737]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:15 compute-0 sudo[88762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:15:15 compute-0 sudo[88762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:15 compute-0 ceph-mon[74194]: 4.5 deep-scrub starts
Sep 30 14:15:15 compute-0 ceph-mon[74194]: 4.5 deep-scrub ok
Sep 30 14:15:15 compute-0 ceph-mon[74194]: 2.4 scrub starts
Sep 30 14:15:15 compute-0 ceph-mon[74194]: 2.4 scrub ok
Sep 30 14:15:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3611745984' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Sep 30 14:15:15 compute-0 ceph-mon[74194]: 2.c scrub starts
Sep 30 14:15:15 compute-0 ceph-mon[74194]: 2.c scrub ok
Sep 30 14:15:15 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3611745984' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Sep 30 14:15:15 compute-0 gifted_ptolemy[88647]: module 'dashboard' is already disabled
Sep 30 14:15:15 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.buxlkm(active, since 2m), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:15:15 compute-0 systemd[1]: libpod-dd254975eca05b49d5e96edcbf69cad2d775389702d9124b4d4ea5ad082d1423.scope: Deactivated successfully.
Sep 30 14:15:15 compute-0 podman[88617]: 2025-09-30 14:15:15.546657346 +0000 UTC m=+1.408043861 container died dd254975eca05b49d5e96edcbf69cad2d775389702d9124b4d4ea5ad082d1423 (image=quay.io/ceph/ceph:v19, name=gifted_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6581057587c336f551c574553f7e780104c4329a62ca7ebe0095c920a4bdd9f5-merged.mount: Deactivated successfully.
Sep 30 14:15:15 compute-0 podman[88617]: 2025-09-30 14:15:15.833613167 +0000 UTC m=+1.694999682 container remove dd254975eca05b49d5e96edcbf69cad2d775389702d9124b4d4ea5ad082d1423 (image=quay.io/ceph/ceph:v19, name=gifted_ptolemy, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:15:15 compute-0 sudo[88579]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:15 compute-0 systemd[1]: libpod-conmon-dd254975eca05b49d5e96edcbf69cad2d775389702d9124b4d4ea5ad082d1423.scope: Deactivated successfully.
Sep 30 14:15:15 compute-0 podman[88828]: 2025-09-30 14:15:15.935221375 +0000 UTC m=+0.382938112 container create 1b21c0d6ba25700be4c0ab26df139177b9fbc828ae45705eb31ccdac45f2ba05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_varahamihira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:15:15 compute-0 systemd[1]: Started libpod-conmon-1b21c0d6ba25700be4c0ab26df139177b9fbc828ae45705eb31ccdac45f2ba05.scope.
Sep 30 14:15:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:16 compute-0 sudo[88881]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggntftkgfwidpyyxeuncwjfhlijqredd ; /usr/bin/python3'
Sep 30 14:15:16 compute-0 sudo[88881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:16 compute-0 podman[88828]: 2025-09-30 14:15:15.916347247 +0000 UTC m=+0.364064034 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:15:16 compute-0 podman[88828]: 2025-09-30 14:15:16.016708881 +0000 UTC m=+0.464425648 container init 1b21c0d6ba25700be4c0ab26df139177b9fbc828ae45705eb31ccdac45f2ba05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_varahamihira, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 14:15:16 compute-0 podman[88828]: 2025-09-30 14:15:16.024103336 +0000 UTC m=+0.471820073 container start 1b21c0d6ba25700be4c0ab26df139177b9fbc828ae45705eb31ccdac45f2ba05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:16 compute-0 podman[88828]: 2025-09-30 14:15:16.027867195 +0000 UTC m=+0.475583962 container attach 1b21c0d6ba25700be4c0ab26df139177b9fbc828ae45705eb31ccdac45f2ba05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_varahamihira, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:15:16 compute-0 nervous_varahamihira[88879]: 167 167
Sep 30 14:15:16 compute-0 systemd[1]: libpod-1b21c0d6ba25700be4c0ab26df139177b9fbc828ae45705eb31ccdac45f2ba05.scope: Deactivated successfully.
Sep 30 14:15:16 compute-0 conmon[88879]: conmon 1b21c0d6ba25700be4c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b21c0d6ba25700be4c0ab26df139177b9fbc828ae45705eb31ccdac45f2ba05.scope/container/memory.events
Sep 30 14:15:16 compute-0 podman[88828]: 2025-09-30 14:15:16.029711244 +0000 UTC m=+0.477427981 container died 1b21c0d6ba25700be4c0ab26df139177b9fbc828ae45705eb31ccdac45f2ba05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_varahamihira, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 14:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9749553c749c142df97524c984f88c46aa9336e7f2610f72cb7ea6f0302ab810-merged.mount: Deactivated successfully.
Sep 30 14:15:16 compute-0 podman[88828]: 2025-09-30 14:15:16.069258756 +0000 UTC m=+0.516975493 container remove 1b21c0d6ba25700be4c0ab26df139177b9fbc828ae45705eb31ccdac45f2ba05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:15:16 compute-0 systemd[1]: libpod-conmon-1b21c0d6ba25700be4c0ab26df139177b9fbc828ae45705eb31ccdac45f2ba05.scope: Deactivated successfully.
Sep 30 14:15:16 compute-0 python3[88884]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:16 compute-0 podman[88901]: 2025-09-30 14:15:16.213966218 +0000 UTC m=+0.043583250 container create 146ffdfd5d2bb0b45d1cf1a07491b8417165a229b8a2c1398955d2e50b1d8cc7 (image=quay.io/ceph/ceph:v19, name=musing_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:15:16 compute-0 podman[88916]: 2025-09-30 14:15:16.246265709 +0000 UTC m=+0.049824104 container create b370f4d7a73ff8f1c319700d8c1f7fcf6d0d6dc7e04a957c56b2846858c0bf1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:15:16 compute-0 systemd[1]: Started libpod-conmon-146ffdfd5d2bb0b45d1cf1a07491b8417165a229b8a2c1398955d2e50b1d8cc7.scope.
Sep 30 14:15:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:16 compute-0 systemd[1]: Started libpod-conmon-b370f4d7a73ff8f1c319700d8c1f7fcf6d0d6dc7e04a957c56b2846858c0bf1c.scope.
Sep 30 14:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c9ac723fca5aad9df5d473b0e705bd305291f31d27c3a4ac031d24944298fd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c9ac723fca5aad9df5d473b0e705bd305291f31d27c3a4ac031d24944298fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c9ac723fca5aad9df5d473b0e705bd305291f31d27c3a4ac031d24944298fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:16 compute-0 podman[88901]: 2025-09-30 14:15:16.196079536 +0000 UTC m=+0.025696588 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:16 compute-0 podman[88901]: 2025-09-30 14:15:16.303679881 +0000 UTC m=+0.133296943 container init 146ffdfd5d2bb0b45d1cf1a07491b8417165a229b8a2c1398955d2e50b1d8cc7 (image=quay.io/ceph/ceph:v19, name=musing_chatterjee, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:15:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:16 compute-0 podman[88901]: 2025-09-30 14:15:16.313942172 +0000 UTC m=+0.143559204 container start 146ffdfd5d2bb0b45d1cf1a07491b8417165a229b8a2c1398955d2e50b1d8cc7 (image=quay.io/ceph/ceph:v19, name=musing_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f9a457062f83cf4d77e72d32063b1b4d47a6607dcd470a6f330cdd6149426d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f9a457062f83cf4d77e72d32063b1b4d47a6607dcd470a6f330cdd6149426d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:16 compute-0 podman[88916]: 2025-09-30 14:15:16.226325113 +0000 UTC m=+0.029883558 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f9a457062f83cf4d77e72d32063b1b4d47a6607dcd470a6f330cdd6149426d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f9a457062f83cf4d77e72d32063b1b4d47a6607dcd470a6f330cdd6149426d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:16 compute-0 podman[88901]: 2025-09-30 14:15:16.318108362 +0000 UTC m=+0.147725394 container attach 146ffdfd5d2bb0b45d1cf1a07491b8417165a229b8a2c1398955d2e50b1d8cc7 (image=quay.io/ceph/ceph:v19, name=musing_chatterjee, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 14:15:16 compute-0 podman[88916]: 2025-09-30 14:15:16.326196775 +0000 UTC m=+0.129755210 container init b370f4d7a73ff8f1c319700d8c1f7fcf6d0d6dc7e04a957c56b2846858c0bf1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_nash, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 14:15:16 compute-0 podman[88916]: 2025-09-30 14:15:16.336680561 +0000 UTC m=+0.140238956 container start b370f4d7a73ff8f1c319700d8c1f7fcf6d0d6dc7e04a957c56b2846858c0bf1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:15:16 compute-0 podman[88916]: 2025-09-30 14:15:16.341323583 +0000 UTC m=+0.144881998 container attach b370f4d7a73ff8f1c319700d8c1f7fcf6d0d6dc7e04a957c56b2846858c0bf1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:15:16 compute-0 ceph-mon[74194]: 3.d scrub starts
Sep 30 14:15:16 compute-0 ceph-mon[74194]: 3.d scrub ok
Sep 30 14:15:16 compute-0 ceph-mon[74194]: pgmap v108: 100 pgs: 18 peering, 82 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3611745984' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Sep 30 14:15:16 compute-0 ceph-mon[74194]: mgrmap e11: compute-0.buxlkm(active, since 2m), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:15:16 compute-0 ceph-mon[74194]: 2.13 scrub starts
Sep 30 14:15:16 compute-0 ceph-mon[74194]: 2.13 scrub ok
Sep 30 14:15:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Sep 30 14:15:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1427431386' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Sep 30 14:15:16 compute-0 lvm[89036]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:15:16 compute-0 lvm[89036]: VG ceph_vg0 finished
Sep 30 14:15:17 compute-0 lvm[89040]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:15:17 compute-0 lvm[89040]: VG ceph_vg0 finished
Sep 30 14:15:17 compute-0 stupefied_nash[88941]: {}
Sep 30 14:15:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v109: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:17 compute-0 podman[88916]: 2025-09-30 14:15:17.049114512 +0000 UTC m=+0.852672907 container died b370f4d7a73ff8f1c319700d8c1f7fcf6d0d6dc7e04a957c56b2846858c0bf1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:17 compute-0 systemd[1]: libpod-b370f4d7a73ff8f1c319700d8c1f7fcf6d0d6dc7e04a957c56b2846858c0bf1c.scope: Deactivated successfully.
Sep 30 14:15:17 compute-0 systemd[1]: libpod-b370f4d7a73ff8f1c319700d8c1f7fcf6d0d6dc7e04a957c56b2846858c0bf1c.scope: Consumed 1.071s CPU time.
Sep 30 14:15:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-77f9a457062f83cf4d77e72d32063b1b4d47a6607dcd470a6f330cdd6149426d-merged.mount: Deactivated successfully.
Sep 30 14:15:17 compute-0 podman[88916]: 2025-09-30 14:15:17.090697038 +0000 UTC m=+0.894255433 container remove b370f4d7a73ff8f1c319700d8c1f7fcf6d0d6dc7e04a957c56b2846858c0bf1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_nash, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:17 compute-0 systemd[1]: libpod-conmon-b370f4d7a73ff8f1c319700d8c1f7fcf6d0d6dc7e04a957c56b2846858c0bf1c.scope: Deactivated successfully.
Sep 30 14:15:17 compute-0 sudo[88762]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:15:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:15:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:17 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 4c5c5c50-bfbb-449e-9cb3-a8ff46ee7cb1 (Updating rgw.rgw deployment (+3 -> 3))
Sep 30 14:15:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.evkboy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 14:15:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.evkboy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:15:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.evkboy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:15:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Sep 30 14:15:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:15:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:15:17 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.evkboy on compute-2
Sep 30 14:15:17 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.evkboy on compute-2
Sep 30 14:15:17 compute-0 ceph-mon[74194]: 4.1b scrub starts
Sep 30 14:15:17 compute-0 ceph-mon[74194]: 4.1b scrub ok
Sep 30 14:15:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1427431386' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Sep 30 14:15:17 compute-0 ceph-mon[74194]: 2.1b scrub starts
Sep 30 14:15:17 compute-0 ceph-mon[74194]: 2.1b scrub ok
Sep 30 14:15:17 compute-0 ceph-mon[74194]: pgmap v109: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:17 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:17 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:17 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.evkboy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:15:17 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.evkboy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:15:17 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:17 compute-0 ceph-mon[74194]: from='mgr.14122 192.168.122.100:0/2101433928' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:15:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1427431386' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr handle_mgr_map respawning because set of enabled modules changed!
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  e: '/usr/bin/ceph-mgr'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  0: '/usr/bin/ceph-mgr'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  1: '-n'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  2: 'mgr.compute-0.buxlkm'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  3: '-f'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  4: '--setuser'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  5: 'ceph'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  6: '--setgroup'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  7: 'ceph'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  8: '--default-log-to-file=false'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  9: '--default-log-to-journald=true'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  10: '--default-log-to-stderr=false'
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr respawn  exe_path /proc/self/exe
Sep 30 14:15:18 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.buxlkm(active, since 2m), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:15:18 compute-0 systemd[1]: libpod-146ffdfd5d2bb0b45d1cf1a07491b8417165a229b8a2c1398955d2e50b1d8cc7.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 podman[88901]: 2025-09-30 14:15:18.209886077 +0000 UTC m=+2.039503109 container died 146ffdfd5d2bb0b45d1cf1a07491b8417165a229b8a2c1398955d2e50b1d8cc7 (image=quay.io/ceph/ceph:v19, name=musing_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:15:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-07c9ac723fca5aad9df5d473b0e705bd305291f31d27c3a4ac031d24944298fd-merged.mount: Deactivated successfully.
Sep 30 14:15:18 compute-0 podman[88901]: 2025-09-30 14:15:18.270071133 +0000 UTC m=+2.099688165 container remove 146ffdfd5d2bb0b45d1cf1a07491b8417165a229b8a2c1398955d2e50b1d8cc7 (image=quay.io/ceph/ceph:v19, name=musing_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:18 compute-0 sshd-session[75765]: Connection closed by 192.168.122.100 port 39354
Sep 30 14:15:18 compute-0 sshd-session[75821]: Connection closed by 192.168.122.100 port 39374
Sep 30 14:15:18 compute-0 sshd-session[75792]: Connection closed by 192.168.122.100 port 39362
Sep 30 14:15:18 compute-0 sshd-session[75736]: Connection closed by 192.168.122.100 port 39352
Sep 30 14:15:18 compute-0 sshd-session[75707]: Connection closed by 192.168.122.100 port 39350
Sep 30 14:15:18 compute-0 sshd-session[75562]: Connection closed by 192.168.122.100 port 39294
Sep 30 14:15:18 compute-0 sshd-session[75789]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 sshd-session[75533]: Connection closed by 192.168.122.100 port 39278
Sep 30 14:15:18 compute-0 sshd-session[75678]: Connection closed by 192.168.122.100 port 39348
Sep 30 14:15:18 compute-0 sshd-session[75532]: Connection closed by 192.168.122.100 port 39270
Sep 30 14:15:18 compute-0 sshd-session[75591]: Connection closed by 192.168.122.100 port 39306
Sep 30 14:15:18 compute-0 sshd-session[75620]: Connection closed by 192.168.122.100 port 39320
Sep 30 14:15:18 compute-0 sshd-session[75649]: Connection closed by 192.168.122.100 port 39334
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 33 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 sshd-session[75762]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 sshd-session[75818]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 sshd-session[75733]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 sshd-session[75675]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 sshd-session[75510]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 sshd-session[75704]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 32 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 sshd-session[75646]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 systemd[1]: libpod-conmon-146ffdfd5d2bb0b45d1cf1a07491b8417165a229b8a2c1398955d2e50b1d8cc7.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 systemd[1]: session-34.scope: Consumed 24.093s CPU time.
Sep 30 14:15:18 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 31 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 22 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 29 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 30 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 28 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 34 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 sshd-session[75527]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 sshd-session[75559]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 sudo[88881]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:18 compute-0 sshd-session[75617]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 sshd-session[75588]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:18 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 33.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 24 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ignoring --setuser ceph since I am not root
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 25 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ignoring --setgroup ceph since I am not root
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 26 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Session 27 logged out. Waiting for processes to exit.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 32.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 31.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 22.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 30.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 28.
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: pidfile_write: ignore empty --pid-file
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 34.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 29.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 27.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 26.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 25.
Sep 30 14:15:18 compute-0 systemd-logind[808]: Removed session 24.
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'alerts'
Sep 30 14:15:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:18.465+0000 7f8aefe98140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'balancer'
Sep 30 14:15:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:18.551+0000 7f8aefe98140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:15:18 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'cephadm'
Sep 30 14:15:18 compute-0 sudo[89111]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eifyxobpkirpkvifalquczdwwhmsxyvf ; /usr/bin/python3'
Sep 30 14:15:18 compute-0 sudo[89111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:18 compute-0 python3[89113]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:18 compute-0 podman[89114]: 2025-09-30 14:15:18.797469209 +0000 UTC m=+0.046109556 container create 4930029d1a19fb9fd913d2996707def152ba1a6fecbf9c13a6b60b719f37c311 (image=quay.io/ceph/ceph:v19, name=competent_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:15:18 compute-0 systemd[1]: Started libpod-conmon-4930029d1a19fb9fd913d2996707def152ba1a6fecbf9c13a6b60b719f37c311.scope.
Sep 30 14:15:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5aabbdd7e143bf8ba223486ea76ebabc3a2d3a1d67ea5dc0d5a541db8b1e01/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5aabbdd7e143bf8ba223486ea76ebabc3a2d3a1d67ea5dc0d5a541db8b1e01/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5aabbdd7e143bf8ba223486ea76ebabc3a2d3a1d67ea5dc0d5a541db8b1e01/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:18 compute-0 podman[89114]: 2025-09-30 14:15:18.777284937 +0000 UTC m=+0.025925304 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:18 compute-0 podman[89114]: 2025-09-30 14:15:18.889471903 +0000 UTC m=+0.138112270 container init 4930029d1a19fb9fd913d2996707def152ba1a6fecbf9c13a6b60b719f37c311 (image=quay.io/ceph/ceph:v19, name=competent_hypatia, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:15:18 compute-0 podman[89114]: 2025-09-30 14:15:18.908298579 +0000 UTC m=+0.156938926 container start 4930029d1a19fb9fd913d2996707def152ba1a6fecbf9c13a6b60b719f37c311 (image=quay.io/ceph/ceph:v19, name=competent_hypatia, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:15:18 compute-0 podman[89114]: 2025-09-30 14:15:18.91857292 +0000 UTC m=+0.167213287 container attach 4930029d1a19fb9fd913d2996707def152ba1a6fecbf9c13a6b60b719f37c311 (image=quay.io/ceph/ceph:v19, name=competent_hypatia, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:15:19 compute-0 ceph-mon[74194]: 4.18 scrub starts
Sep 30 14:15:19 compute-0 ceph-mon[74194]: 4.18 scrub ok
Sep 30 14:15:19 compute-0 ceph-mon[74194]: 4.1c deep-scrub starts
Sep 30 14:15:19 compute-0 ceph-mon[74194]: 4.1c deep-scrub ok
Sep 30 14:15:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1427431386' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Sep 30 14:15:19 compute-0 ceph-mon[74194]: mgrmap e12: compute-0.buxlkm(active, since 2m), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:15:19 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'crash'
Sep 30 14:15:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:19.407+0000 7f8aefe98140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:15:19 compute-0 ceph-mgr[74485]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:15:19 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'dashboard'
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'devicehealth'
Sep 30 14:15:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:20.081+0000 7f8aefe98140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 14:15:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:15:20 compute-0 ceph-mon[74194]: 3.a scrub starts
Sep 30 14:15:20 compute-0 ceph-mon[74194]: 3.a scrub ok
Sep 30 14:15:20 compute-0 ceph-mon[74194]: 4.2 scrub starts
Sep 30 14:15:20 compute-0 ceph-mon[74194]: 4.2 scrub ok
Sep 30 14:15:20 compute-0 ceph-mon[74194]: 4.a scrub starts
Sep 30 14:15:20 compute-0 ceph-mon[74194]: 4.a scrub ok
Sep 30 14:15:20 compute-0 ceph-mon[74194]: 2.d deep-scrub starts
Sep 30 14:15:20 compute-0 ceph-mon[74194]: 2.d deep-scrub ok
Sep 30 14:15:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 14:15:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 14:15:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   from numpy import show_config as show_numpy_config
Sep 30 14:15:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:20.267+0000 7f8aefe98140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'influx'
Sep 30 14:15:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:20.344+0000 7f8aefe98140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'insights'
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'iostat'
Sep 30 14:15:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:20.490+0000 7f8aefe98140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'k8sevents'
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'localpool'
Sep 30 14:15:20 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 14:15:21 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mirroring'
Sep 30 14:15:21 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'nfs'
Sep 30 14:15:21 compute-0 ceph-mon[74194]: 3.5 scrub starts
Sep 30 14:15:21 compute-0 ceph-mon[74194]: 3.5 scrub ok
Sep 30 14:15:21 compute-0 ceph-mon[74194]: 3.1b scrub starts
Sep 30 14:15:21 compute-0 ceph-mon[74194]: 3.1b scrub ok
Sep 30 14:15:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:21.557+0000 7f8aefe98140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:15:21 compute-0 ceph-mgr[74485]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:15:21 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'orchestrator'
Sep 30 14:15:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:21.796+0000 7f8aefe98140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:21 compute-0 ceph-mgr[74485]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:21 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 14:15:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:21.879+0000 7f8aefe98140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:15:21 compute-0 ceph-mgr[74485]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:15:21 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_support'
Sep 30 14:15:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:21.953+0000 7f8aefe98140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:15:21 compute-0 ceph-mgr[74485]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:15:21 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 14:15:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:22.037+0000 7f8aefe98140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:15:22 compute-0 ceph-mgr[74485]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:15:22 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'progress'
Sep 30 14:15:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:22.115+0000 7f8aefe98140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:15:22 compute-0 ceph-mgr[74485]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:15:22 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'prometheus'
Sep 30 14:15:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:22.495+0000 7f8aefe98140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:15:22 compute-0 ceph-mgr[74485]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:15:22 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rbd_support'
Sep 30 14:15:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:22.603+0000 7f8aefe98140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:15:22 compute-0 ceph-mgr[74485]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:15:22 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'restful'
Sep 30 14:15:22 compute-0 ceph-mon[74194]: 4.1a scrub starts
Sep 30 14:15:22 compute-0 ceph-mon[74194]: 4.1a scrub ok
Sep 30 14:15:22 compute-0 ceph-mon[74194]: 4.19 scrub starts
Sep 30 14:15:22 compute-0 ceph-mon[74194]: 4.19 scrub ok
Sep 30 14:15:22 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rgw'
Sep 30 14:15:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:23.046+0000 7f8aefe98140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rook'
Sep 30 14:15:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:23.652+0000 7f8aefe98140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'selftest'
Sep 30 14:15:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Sep 30 14:15:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:23.735+0000 7f8aefe98140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'snap_schedule'
Sep 30 14:15:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:23.822+0000 7f8aefe98140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'stats'
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'status'
Sep 30 14:15:23 compute-0 ceph-mon[74194]: 3.11 scrub starts
Sep 30 14:15:23 compute-0 ceph-mon[74194]: 3.11 scrub ok
Sep 30 14:15:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:23.992+0000 7f8aefe98140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:15:23 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telegraf'
Sep 30 14:15:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:24.071+0000 7f8aefe98140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:15:24 compute-0 ceph-mgr[74485]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:15:24 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telemetry'
Sep 30 14:15:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:24.240+0000 7f8aefe98140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:15:24 compute-0 ceph-mgr[74485]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:15:24 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 14:15:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Sep 30 14:15:24 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Sep 30 14:15:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Sep 30 14:15:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Sep 30 14:15:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:24.483+0000 7f8aefe98140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:24 compute-0 ceph-mgr[74485]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:24 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'volumes'
Sep 30 14:15:24 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 35 pg[8.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [0] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:15:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:24.755+0000 7f8aefe98140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:15:24 compute-0 ceph-mgr[74485]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:15:24 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'zabbix'
Sep 30 14:15:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:24.828+0000 7f8aefe98140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:15:24 compute-0 ceph-mgr[74485]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:15:24 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Active manager daemon compute-0.buxlkm restarted
Sep 30 14:15:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Sep 30 14:15:24 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.buxlkm
Sep 30 14:15:24 compute-0 ceph-mgr[74485]: ms_deliver_dispatch: unhandled message 0x5569dd439860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 14:15:24 compute-0 sshd-session[89166]: Invalid user user from 210.90.155.80 port 55290
Sep 30 14:15:25 compute-0 sshd-session[89166]: Received disconnect from 210.90.155.80 port 55290:11: Bye Bye [preauth]
Sep 30 14:15:25 compute-0 sshd-session[89166]: Disconnected from invalid user user 210.90.155.80 port 55290 [preauth]
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr handle_mgr_map Activating!
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr handle_mgr_map I am now activating
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.buxlkm(active, starting, since 0.357214s), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.udzudc restarted
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.udzudc started
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.zeqptq restarted
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.zeqptq started
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e1 all = 1
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: balancer
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [balancer INFO root] Starting
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Manager daemon compute-0.buxlkm is now available
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:15:25
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: cephadm
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: crash
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: dashboard
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO access_control] Loading user roles DB version=2
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: devicehealth
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO sso] Loading SSO DB version=1
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Starting
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: iostat
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: nfs
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: orchestrator
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO root] Configured CherryPy, starting engine...
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: pg_autoscaler
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: progress
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [progress INFO root] Loading...
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f8a73249f70>, <progress.module.GhostEvent object at 0x7f8a6e9f1190>, <progress.module.GhostEvent object at 0x7f8a6e9f11c0>, <progress.module.GhostEvent object at 0x7f8a6e9f11f0>, <progress.module.GhostEvent object at 0x7f8a6e9f1220>, <progress.module.GhostEvent object at 0x7f8a6e9f1250>, <progress.module.GhostEvent object at 0x7f8a6e9f1280>, <progress.module.GhostEvent object at 0x7f8a6e9f12b0>, <progress.module.GhostEvent object at 0x7f8a6e9f12e0>] historic events
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [progress INFO root] Loaded OSDMap, ready.
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] recovery thread starting
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] starting setup
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: rbd_support
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: restful
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [restful INFO root] server_addr: :: server_port: 8003
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: status
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: telemetry
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [restful WARNING root] server not running: no certificate configured
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:15:25 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 36 pg[8.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [0] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] PerfHandler: starting
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: vms, start_after=
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: volumes, start_after=
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: backups, start_after=
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: images, start_after=
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TaskHandler: starting
Sep 30 14:15:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"} v 0)
Sep 30 14:15:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:15:25 compute-0 ceph-mon[74194]: 4.15 scrub starts
Sep 30 14:15:25 compute-0 ceph-mon[74194]: 4.15 scrub ok
Sep 30 14:15:25 compute-0 ceph-mon[74194]: osdmap e35: 3 total, 3 up, 3 in
Sep 30 14:15:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2515320028' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Sep 30 14:15:25 compute-0 ceph-mon[74194]: Active manager daemon compute-0.buxlkm restarted
Sep 30 14:15:25 compute-0 ceph-mon[74194]: Activating manager daemon compute-0.buxlkm
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: volumes
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [rbd_support INFO root] setup complete
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Sep 30 14:15:25 compute-0 sshd-session[89284]: Accepted publickey for ceph-admin from 192.168.122.100 port 37166 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:15:25 compute-0 systemd-logind[808]: New session 35 of user ceph-admin.
Sep 30 14:15:25 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Sep 30 14:15:25 compute-0 sshd-session[89284]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:15:25 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.module] Engine started.
Sep 30 14:15:25 compute-0 sudo[89300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:25 compute-0 sudo[89300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:25 compute-0 sudo[89300]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:25 compute-0 sudo[89327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:15:25 compute-0 sudo[89327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Sep 30 14:15:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Sep 30 14:15:26 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Sep 30 14:15:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Sep 30 14:15:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.buxlkm(active, since 1.39084s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:26 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14337 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Sep 30 14:15:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v4: 102 pgs: 101 active+clean, 1 unknown; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:26 compute-0 competent_hypatia[89129]: Option GRAFANA_API_USERNAME updated
Sep 30 14:15:26 compute-0 systemd[1]: libpod-4930029d1a19fb9fd913d2996707def152ba1a6fecbf9c13a6b60b719f37c311.scope: Deactivated successfully.
Sep 30 14:15:26 compute-0 podman[89114]: 2025-09-30 14:15:26.364585678 +0000 UTC m=+7.613226055 container died 4930029d1a19fb9fd913d2996707def152ba1a6fecbf9c13a6b60b719f37c311 (image=quay.io/ceph/ceph:v19, name=competent_hypatia, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff5aabbdd7e143bf8ba223486ea76ebabc3a2d3a1d67ea5dc0d5a541db8b1e01-merged.mount: Deactivated successfully.
Sep 30 14:15:26 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 37 pg[9.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:15:26 compute-0 podman[89114]: 2025-09-30 14:15:26.422196276 +0000 UTC m=+7.670836623 container remove 4930029d1a19fb9fd913d2996707def152ba1a6fecbf9c13a6b60b719f37c311 (image=quay.io/ceph/ceph:v19, name=competent_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:15:26 compute-0 systemd[1]: libpod-conmon-4930029d1a19fb9fd913d2996707def152ba1a6fecbf9c13a6b60b719f37c311.scope: Deactivated successfully.
Sep 30 14:15:26 compute-0 sudo[89111]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:26 compute-0 ceph-mon[74194]: 4.9 scrub starts
Sep 30 14:15:26 compute-0 ceph-mon[74194]: 4.9 scrub ok
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Sep 30 14:15:26 compute-0 ceph-mon[74194]: osdmap e36: 3 total, 3 up, 3 in
Sep 30 14:15:26 compute-0 ceph-mon[74194]: mgrmap e13: compute-0.buxlkm(active, starting, since 0.357214s), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:15:26 compute-0 ceph-mon[74194]: Standby manager daemon compute-2.udzudc restarted
Sep 30 14:15:26 compute-0 ceph-mon[74194]: Standby manager daemon compute-2.udzudc started
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: Standby manager daemon compute-1.zeqptq restarted
Sep 30 14:15:26 compute-0 ceph-mon[74194]: Standby manager daemon compute-1.zeqptq started
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: Manager daemon compute-0.buxlkm is now available
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: 4.1 scrub starts
Sep 30 14:15:26 compute-0 ceph-mon[74194]: 4.1 scrub ok
Sep 30 14:15:26 compute-0 ceph-mon[74194]: osdmap e37: 3 total, 3 up, 3 in
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/701204040' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: mgrmap e14: compute-0.buxlkm(active, since 1.39084s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='client.14337 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:26 compute-0 ceph-mon[74194]: pgmap v4: 102 pgs: 101 active+clean, 1 unknown; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:26 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:26 compute-0 podman[89434]: 2025-09-30 14:15:26.575491625 +0000 UTC m=+0.067430478 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:26 compute-0 sudo[89477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdbhxgjwkgmdskwrjunbfxfhwiuoqvmc ; /usr/bin/python3'
Sep 30 14:15:26 compute-0 sudo[89477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:26 compute-0 podman[89434]: 2025-09-30 14:15:26.683572393 +0000 UTC m=+0.175511226 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 14:15:26 compute-0 python3[89479]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
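The ansible task above starts a throwaway podman container with --entrypoint ceph and feeds the Grafana admin password on stdin to "dashboard set-grafana-api-password -i -", so the secret never appears on the command line. A rough Python equivalent of the same pattern, with the image, fsid and keyring paths taken from the log and a placeholder password standing in for the real secret:

import subprocess

cmd = [
    "podman", "run", "--rm", "--net=host", "--ipc=host", "--interactive",
    "--volume", "/etc/ceph:/etc/ceph:z",
    "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
    "--fsid", "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
    "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring",
    "dashboard", "set-grafana-api-password", "-i", "-",
]
# "-i -" makes the ceph CLI read the value from stdin instead of an argument
subprocess.run(cmd, input="example-password\n", text=True, check=True)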
Sep 30 14:15:26 compute-0 podman[89502]: 2025-09-30 14:15:26.828097251 +0000 UTC m=+0.044981006 container create fbd6ba1dbb66827f7a1206e18f430baf1632f31b75264c67ab0c9f4a212df5fe (image=quay.io/ceph/ceph:v19, name=admiring_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:15:26 compute-0 systemd[1]: Started libpod-conmon-fbd6ba1dbb66827f7a1206e18f430baf1632f31b75264c67ab0c9f4a212df5fe.scope.
Sep 30 14:15:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:15:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829e93e4f056a046c7057de919024fa6554ed6f05c3a6e69925b5c291c28f6ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829e93e4f056a046c7057de919024fa6554ed6f05c3a6e69925b5c291c28f6ed/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829e93e4f056a046c7057de919024fa6554ed6f05c3a6e69925b5c291c28f6ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:26 compute-0 podman[89502]: 2025-09-30 14:15:26.808039682 +0000 UTC m=+0.024923457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:26 compute-0 podman[89502]: 2025-09-30 14:15:26.913980353 +0000 UTC m=+0.130864128 container init fbd6ba1dbb66827f7a1206e18f430baf1632f31b75264c67ab0c9f4a212df5fe (image=quay.io/ceph/ceph:v19, name=admiring_lehmann, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:15:26 compute-0 podman[89502]: 2025-09-30 14:15:26.921072419 +0000 UTC m=+0.137956174 container start fbd6ba1dbb66827f7a1206e18f430baf1632f31b75264c67ab0c9f4a212df5fe (image=quay.io/ceph/ceph:v19, name=admiring_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:26 compute-0 podman[89502]: 2025-09-30 14:15:26.925235569 +0000 UTC m=+0.142119474 container attach fbd6ba1dbb66827f7a1206e18f430baf1632f31b75264c67ab0c9f4a212df5fe (image=quay.io/ceph/ceph:v19, name=admiring_lehmann, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:15:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:15:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 sudo[89327]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:15:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:15:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Sep 30 14:15:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:15:27] ENGINE Bus STARTING
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:15:27] ENGINE Bus STARTING
Sep 30 14:15:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Sep 30 14:15:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Sep 30 14:15:27 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Sep 30 14:15:27 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:15:27 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 38 pg[9.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:15:27 compute-0 sudo[89585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:27 compute-0 sudo[89585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:27 compute-0 sudo[89585]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:15:27] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:15:27] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v6: 102 pgs: 101 active+clean, 1 unknown; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.24137 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:27 compute-0 sudo[89619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:15:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Sep 30 14:15:27 compute-0 sudo[89619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:15:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 admiring_lehmann[89535]: Option GRAFANA_API_PASSWORD updated
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:15:27] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:15:27] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:15:27] ENGINE Bus STARTED
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:15:27] ENGINE Bus STARTED
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:15:27] ENGINE Client ('192.168.122.100', 37994) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:15:27] ENGINE Client ('192.168.122.100', 37994) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:15:27 compute-0 systemd[1]: libpod-fbd6ba1dbb66827f7a1206e18f430baf1632f31b75264c67ab0c9f4a212df5fe.scope: Deactivated successfully.
Sep 30 14:15:27 compute-0 podman[89502]: 2025-09-30 14:15:27.423281532 +0000 UTC m=+0.640165307 container died fbd6ba1dbb66827f7a1206e18f430baf1632f31b75264c67ab0c9f4a212df5fe (image=quay.io/ceph/ceph:v19, name=admiring_lehmann, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-829e93e4f056a046c7057de919024fa6554ed6f05c3a6e69925b5c291c28f6ed-merged.mount: Deactivated successfully.
Sep 30 14:15:27 compute-0 podman[89502]: 2025-09-30 14:15:27.470294801 +0000 UTC m=+0.687178556 container remove fbd6ba1dbb66827f7a1206e18f430baf1632f31b75264c67ab0c9f4a212df5fe (image=quay.io/ceph/ceph:v19, name=admiring_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:15:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 systemd[1]: libpod-conmon-fbd6ba1dbb66827f7a1206e18f430baf1632f31b75264c67ab0c9f4a212df5fe.scope: Deactivated successfully.
Sep 30 14:15:27 compute-0 sudo[89477]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:27 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.buxlkm(active, since 2s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:27 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Check health
Sep 30 14:15:27 compute-0 sudo[89730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faqjiyfbjxgasduwudgzlylihlwnxgjz ; /usr/bin/python3'
Sep 30 14:15:27 compute-0 sudo[89730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:27 compute-0 sudo[89619]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:27 compute-0 sudo[89741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:27 compute-0 sudo[89741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:27 compute-0 sudo[89741]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:27 compute-0 python3[89737]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:27 compute-0 sudo[89766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 14:15:27 compute-0 sudo[89766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:27 compute-0 podman[89787]: 2025-09-30 14:15:27.940047378 +0000 UTC m=+0.038323591 container create da832eb6f8cc8f3b43f2b7a54427cc2fa8cd311723873dfeea040e3a3138b1ad (image=quay.io/ceph/ceph:v19, name=intelligent_bell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:15:27 compute-0 ceph-mon[74194]: 4.8 deep-scrub starts
Sep 30 14:15:27 compute-0 ceph-mon[74194]: 4.8 deep-scrub ok
Sep 30 14:15:27 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 ceph-mon[74194]: [30/Sep/2025:14:15:27] ENGINE Bus STARTING
Sep 30 14:15:27 compute-0 ceph-mon[74194]: from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Sep 30 14:15:27 compute-0 ceph-mon[74194]: osdmap e38: 3 total, 3 up, 3 in
Sep 30 14:15:27 compute-0 ceph-mon[74194]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:15:27 compute-0 ceph-mon[74194]: [30/Sep/2025:14:15:27] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:15:27 compute-0 ceph-mon[74194]: pgmap v6: 102 pgs: 101 active+clean, 1 unknown; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:27 compute-0 ceph-mon[74194]: from='client.24137 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:27 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:27 compute-0 ceph-mon[74194]: mgrmap e15: compute-0.buxlkm(active, since 2s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:27 compute-0 systemd[1]: Started libpod-conmon-da832eb6f8cc8f3b43f2b7a54427cc2fa8cd311723873dfeea040e3a3138b1ad.scope.
Sep 30 14:15:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231ffe1bc1bc0c2a8cb8ff0388feb9a4981ba637a0629978b89f44c683e3fccc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231ffe1bc1bc0c2a8cb8ff0388feb9a4981ba637a0629978b89f44c683e3fccc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231ffe1bc1bc0c2a8cb8ff0388feb9a4981ba637a0629978b89f44c683e3fccc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:28 compute-0 podman[89787]: 2025-09-30 14:15:27.925465354 +0000 UTC m=+0.023741587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:28 compute-0 podman[89787]: 2025-09-30 14:15:28.03387004 +0000 UTC m=+0.132146283 container init da832eb6f8cc8f3b43f2b7a54427cc2fa8cd311723873dfeea040e3a3138b1ad (image=quay.io/ceph/ceph:v19, name=intelligent_bell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:28 compute-0 podman[89787]: 2025-09-30 14:15:28.040538726 +0000 UTC m=+0.138814939 container start da832eb6f8cc8f3b43f2b7a54427cc2fa8cd311723873dfeea040e3a3138b1ad (image=quay.io/ceph/ceph:v19, name=intelligent_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:15:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Sep 30 14:15:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 14:15:28 compute-0 podman[89787]: 2025-09-30 14:15:28.340678934 +0000 UTC m=+0.438955147 container attach da832eb6f8cc8f3b43f2b7a54427cc2fa8cd311723873dfeea040e3a3138b1ad (image=quay.io/ceph/ceph:v19, name=intelligent_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:15:28 compute-0 sudo[89766]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14373 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Sep 30 14:15:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:15:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:15:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:15:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Sep 30 14:15:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Sep 30 14:15:28 compute-0 intelligent_bell[89807]: Option ALERTMANAGER_API_HOST updated
Sep 30 14:15:29 compute-0 systemd[1]: libpod-da832eb6f8cc8f3b43f2b7a54427cc2fa8cd311723873dfeea040e3a3138b1ad.scope: Deactivated successfully.
Sep 30 14:15:29 compute-0 podman[89787]: 2025-09-30 14:15:29.005369518 +0000 UTC m=+1.103645741 container died da832eb6f8cc8f3b43f2b7a54427cc2fa8cd311723873dfeea040e3a3138b1ad (image=quay.io/ceph/ceph:v19, name=intelligent_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 14:15:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-231ffe1bc1bc0c2a8cb8ff0388feb9a4981ba637a0629978b89f44c683e3fccc-merged.mount: Deactivated successfully.
Sep 30 14:15:29 compute-0 ceph-mon[74194]: [30/Sep/2025:14:15:27] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:15:29 compute-0 ceph-mon[74194]: [30/Sep/2025:14:15:27] ENGINE Bus STARTED
Sep 30 14:15:29 compute-0 ceph-mon[74194]: [30/Sep/2025:14:15:27] ENGINE Client ('192.168.122.100', 37994) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:15:29 compute-0 ceph-mon[74194]: 3.e scrub starts
Sep 30 14:15:29 compute-0 ceph-mon[74194]: 3.e scrub ok
Sep 30 14:15:29 compute-0 ceph-mon[74194]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:15:29 compute-0 podman[89787]: 2025-09-30 14:15:29.04947558 +0000 UTC m=+1.147751793 container remove da832eb6f8cc8f3b43f2b7a54427cc2fa8cd311723873dfeea040e3a3138b1ad (image=quay.io/ceph/ceph:v19, name=intelligent_bell, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 14:15:29 compute-0 systemd[1]: libpod-conmon-da832eb6f8cc8f3b43f2b7a54427cc2fa8cd311723873dfeea040e3a3138b1ad.scope: Deactivated successfully.
Sep 30 14:15:29 compute-0 sudo[89730]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:15:29 compute-0 sudo[89886]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-culveahobokquzehwdtdviqeavwuwsud ; /usr/bin/python3'
Sep 30 14:15:29 compute-0 sudo[89886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v8: 103 pgs: 101 active+clean, 2 unknown; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:29 compute-0 python3[89888]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:29 compute-0 podman[89889]: 2025-09-30 14:15:29.369394909 +0000 UTC m=+0.028124502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:29 compute-0 podman[89889]: 2025-09-30 14:15:29.623429473 +0000 UTC m=+0.282159036 container create a119630e406668d08a2a86eb0be67f4140a393a7eae5d40760cfd2cba31a60a0 (image=quay.io/ceph/ceph:v19, name=infallible_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:29 compute-0 systemd[1]: Started libpod-conmon-a119630e406668d08a2a86eb0be67f4140a393a7eae5d40760cfd2cba31a60a0.scope.
Sep 30 14:15:29 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea1153a1083c5bcacf92c8a0d14a7e28dcf319de7447a05ca1c5e3260becbbb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea1153a1083c5bcacf92c8a0d14a7e28dcf319de7447a05ca1c5e3260becbbb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea1153a1083c5bcacf92c8a0d14a7e28dcf319de7447a05ca1c5e3260becbbb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:29 compute-0 podman[89889]: 2025-09-30 14:15:29.949407002 +0000 UTC m=+0.608136585 container init a119630e406668d08a2a86eb0be67f4140a393a7eae5d40760cfd2cba31a60a0 (image=quay.io/ceph/ceph:v19, name=infallible_dewdney, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 14:15:29 compute-0 podman[89889]: 2025-09-30 14:15:29.956259082 +0000 UTC m=+0.614988645 container start a119630e406668d08a2a86eb0be67f4140a393a7eae5d40760cfd2cba31a60a0 (image=quay.io/ceph/ceph:v19, name=infallible_dewdney, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
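The rejected value in the two warnings above is simply the autotuned "127.8M" expressed in bytes; osd_memory_target enforces a hard floor of 939524096 bytes (896 MiB), so the config set is refused on these small hosts, and the "config rm ... osd_memory_target" dispatches a little earlier appear to be part of the same autotune pass clearing old overrides. A quick arithmetic check, with both numbers copied from the log:

proposed = 134_071_500   # bytes, from the WARNING lines (the "127.8M" adjustment)
minimum  = 939_524_096   # bytes, osd_memory_target minimum (896 MiB)
print(f"proposed = {proposed / 2**20:.1f} MiB")   # ~127.9 MiB
print(f"minimum  = {minimum  / 2**20:.0f} MiB")   # 896 MiB
print("accepted" if proposed >= minimum else "rejected: below minimum")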
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:15:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:15:29 compute-0 podman[89889]: 2025-09-30 14:15:29.965218128 +0000 UTC m=+0.623947691 container attach a119630e406668d08a2a86eb0be67f4140a393a7eae5d40760cfd2cba31a60a0 (image=quay.io/ceph/ceph:v19, name=infallible_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:15:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Sep 30 14:15:30 compute-0 sudo[89909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 14:15:30 compute-0 sudo[89909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[89909]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 sudo[89934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph
Sep 30 14:15:30 compute-0 sudo[89934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[89934]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 sudo[89978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:30 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.buxlkm(active, since 5s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:30 compute-0 sudo[89978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[89978]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 sudo[90003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:30 compute-0 sudo[90003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[90003]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:15:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Sep 30 14:15:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='client.14373 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:30 compute-0 ceph-mon[74194]: osdmap e39: 3 total, 3 up, 3 in
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/701204040' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:30 compute-0 ceph-mon[74194]: pgmap v8: 103 pgs: 101 active+clean, 2 unknown; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:15:30 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:15:30 compute-0 sudo[90028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:30 compute-0 sudo[90028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[90028]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Sep 30 14:15:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.24143 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Sep 30 14:15:30 compute-0 sudo[90076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:30 compute-0 sudo[90076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[90076]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 sudo[90102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:30 compute-0 sudo[90102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[90102]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 sudo[90127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Sep 30 14:15:30 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:30 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:30 compute-0 sudo[90127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:30 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:30 compute-0 sudo[90127]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:30 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:30 compute-0 sudo[90152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:15:30 compute-0 sudo[90152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[90152]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:30 compute-0 infallible_dewdney[89905]: Option PROMETHEUS_API_HOST updated
Sep 30 14:15:30 compute-0 sudo[90177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:15:30 compute-0 sudo[90177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[90177]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 systemd[1]: libpod-a119630e406668d08a2a86eb0be67f4140a393a7eae5d40760cfd2cba31a60a0.scope: Deactivated successfully.
Sep 30 14:15:30 compute-0 podman[89889]: 2025-09-30 14:15:30.64354774 +0000 UTC m=+1.302277303 container died a119630e406668d08a2a86eb0be67f4140a393a7eae5d40760cfd2cba31a60a0 (image=quay.io/ceph/ceph:v19, name=infallible_dewdney, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:30 compute-0 sudo[90209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:30 compute-0 sudo[90209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[90209]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 sudo[90238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:30 compute-0 sudo[90238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[90238]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 sudo[90263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:30 compute-0 sudo[90263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[90263]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ea1153a1083c5bcacf92c8a0d14a7e28dcf319de7447a05ca1c5e3260becbbb-merged.mount: Deactivated successfully.
Sep 30 14:15:30 compute-0 podman[89889]: 2025-09-30 14:15:30.882295861 +0000 UTC m=+1.541025424 container remove a119630e406668d08a2a86eb0be67f4140a393a7eae5d40760cfd2cba31a60a0 (image=quay.io/ceph/ceph:v19, name=infallible_dewdney, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 14:15:30 compute-0 systemd[1]: libpod-conmon-a119630e406668d08a2a86eb0be67f4140a393a7eae5d40760cfd2cba31a60a0.scope: Deactivated successfully.
Sep 30 14:15:30 compute-0 sudo[89886]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:30 compute-0 sudo[90315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:30 compute-0 sudo[90315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:30 compute-0 sudo[90315]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 sudo[90340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:31 compute-0 sudo[90340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90340]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 sudo[90404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgkqwtnwrpmqsdvrqryinbkrcgynscbo ; /usr/bin/python3'
Sep 30 14:15:31 compute-0 sudo[90404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:31 compute-0 sudo[90375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:31 compute-0 sudo[90375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 sudo[90375]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 sudo[90418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 14:15:31 compute-0 sudo[90418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90418]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 sudo[90443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph
Sep 30 14:15:31 compute-0 sudo[90443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90443]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 python3[90415]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:31 compute-0 sudo[90468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:15:31 compute-0 sudo[90468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90468]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Sep 30 14:15:31 compute-0 podman[90473]: 2025-09-30 14:15:31.241363861 +0000 UTC m=+0.041520085 container create 14b1a1b6db7ea4b02b3588910b16ecb40455429533015443ed764cd667d25a3f (image=quay.io/ceph/ceph:v19, name=determined_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:31 compute-0 ceph-mon[74194]: Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 14:15:31 compute-0 ceph-mon[74194]: Adjusting osd_memory_target on compute-0 to 127.8M
Sep 30 14:15:31 compute-0 ceph-mon[74194]: Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:15:31 compute-0 ceph-mon[74194]: Unable to set osd_memory_target on compute-0 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:15:31 compute-0 ceph-mon[74194]: Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:15:31 compute-0 ceph-mon[74194]: Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:15:31 compute-0 ceph-mon[74194]: Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:15:31 compute-0 ceph-mon[74194]: mgrmap e16: compute-0.buxlkm(active, since 5s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:31 compute-0 ceph-mon[74194]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Sep 30 14:15:31 compute-0 ceph-mon[74194]: from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Sep 30 14:15:31 compute-0 ceph-mon[74194]: osdmap e40: 3 total, 3 up, 3 in
Sep 30 14:15:31 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:31 compute-0 systemd[1]: Started libpod-conmon-14b1a1b6db7ea4b02b3588910b16ecb40455429533015443ed764cd667d25a3f.scope.
Sep 30 14:15:31 compute-0 sudo[90506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:31 compute-0 sudo[90506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90506]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v10: 103 pgs: 103 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 408 B/s wr, 15 op/s
Sep 30 14:15:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee2954e91eb103b810832d62f36b1ba3c1b4a6d311972ec29eb0fd83cf17f7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee2954e91eb103b810832d62f36b1ba3c1b4a6d311972ec29eb0fd83cf17f7d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee2954e91eb103b810832d62f36b1ba3c1b4a6d311972ec29eb0fd83cf17f7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:31 compute-0 podman[90473]: 2025-09-30 14:15:31.219017783 +0000 UTC m=+0.019174027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:31 compute-0 sudo[90536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:15:31 compute-0 sudo[90536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90536]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Sep 30 14:15:31 compute-0 podman[90473]: 2025-09-30 14:15:31.372076216 +0000 UTC m=+0.172232440 container init 14b1a1b6db7ea4b02b3588910b16ecb40455429533015443ed764cd667d25a3f (image=quay.io/ceph/ceph:v19, name=determined_carson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:15:31 compute-0 podman[90473]: 2025-09-30 14:15:31.377341754 +0000 UTC m=+0.177497978 container start 14b1a1b6db7ea4b02b3588910b16ecb40455429533015443ed764cd667d25a3f (image=quay.io/ceph/ceph:v19, name=determined_carson, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:15:31 compute-0 sudo[90585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:15:31 compute-0 sudo[90585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Sep 30 14:15:31 compute-0 sudo[90585]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Sep 30 14:15:31 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Sep 30 14:15:31 compute-0 sudo[90629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:15:31 compute-0 sudo[90629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90629]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 sudo[90654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 sudo[90654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90654]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 sudo[90679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:15:31 compute-0 sudo[90679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:31 compute-0 sudo[90679]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Sep 30 14:15:31 compute-0 sudo[90704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:15:31 compute-0 sudo[90704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90704]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 podman[90473]: 2025-09-30 14:15:31.802782744 +0000 UTC m=+0.602938968 container attach 14b1a1b6db7ea4b02b3588910b16ecb40455429533015443ed764cd667d25a3f (image=quay.io/ceph/ceph:v19, name=determined_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:31 compute-0 sudo[90730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:15:31 compute-0 sudo[90730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90730]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 41 pg[11.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:15:31 compute-0 sudo[90755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:31 compute-0 sudo[90755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90755]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:31 compute-0 sudo[90780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:15:31 compute-0 sudo[90780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:31 compute-0 sudo[90780]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:32 compute-0 determined_carson[90531]: Option GRAFANA_API_URL updated
Sep 30 14:15:32 compute-0 sudo[90828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:15:32 compute-0 sudo[90828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:32 compute-0 sudo[90828]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:32 compute-0 systemd[1]: libpod-14b1a1b6db7ea4b02b3588910b16ecb40455429533015443ed764cd667d25a3f.scope: Deactivated successfully.
Sep 30 14:15:32 compute-0 podman[90854]: 2025-09-30 14:15:32.111486738 +0000 UTC m=+0.023014138 container died 14b1a1b6db7ea4b02b3588910b16ecb40455429533015443ed764cd667d25a3f (image=quay.io/ceph/ceph:v19, name=determined_carson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:32 compute-0 sudo[90855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:15:32 compute-0 sudo[90855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:32 compute-0 sudo[90855]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:15:32 compute-0 sudo[90888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:32 compute-0 sudo[90888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:32 compute-0 sudo[90888]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:15:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:15:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ee2954e91eb103b810832d62f36b1ba3c1b4a6d311972ec29eb0fd83cf17f7d-merged.mount: Deactivated successfully.
Sep 30 14:15:32 compute-0 podman[90854]: 2025-09-30 14:15:32.222341749 +0000 UTC m=+0.133869119 container remove 14b1a1b6db7ea4b02b3588910b16ecb40455429533015443ed764cd667d25a3f (image=quay.io/ceph/ceph:v19, name=determined_carson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:32 compute-0 systemd[1]: libpod-conmon-14b1a1b6db7ea4b02b3588910b16ecb40455429533015443ed764cd667d25a3f.scope: Deactivated successfully.
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:15:32 compute-0 sudo[90404]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:15:32 compute-0 ceph-mon[74194]: from='client.24143 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:32 compute-0 ceph-mon[74194]: Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:32 compute-0 ceph-mon[74194]: Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:32 compute-0 ceph-mon[74194]: Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:32 compute-0 ceph-mon[74194]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:32 compute-0 ceph-mon[74194]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:32 compute-0 ceph-mon[74194]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:32 compute-0 ceph-mon[74194]: pgmap v10: 103 pgs: 103 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 408 B/s wr, 15 op/s
Sep 30 14:15:32 compute-0 ceph-mon[74194]: osdmap e41: 3 total, 3 up, 3 in
Sep 30 14:15:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/701204040' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Sep 30 14:15:32 compute-0 ceph-mon[74194]: from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Sep 30 14:15:32 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:32 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:15:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Sep 30 14:15:32 compute-0 sudo[90943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uijlhhediqavhkzkgienhgyeiefgcpli ; /usr/bin/python3'
Sep 30 14:15:32 compute-0 sudo[90943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Sep 30 14:15:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Sep 30 14:15:32 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 42 pg[11.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:32 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev f5cca836-5623-4b8c-8537-5781716ee475 (Updating node-exporter deployment (+3 -> 3))
Sep 30 14:15:32 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Sep 30 14:15:32 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Sep 30 14:15:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Sep 30 14:15:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Sep 30 14:15:32 compute-0 python3[90945]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:32 compute-0 sudo[90946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:32 compute-0 sudo[90946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:32 compute-0 sudo[90946]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:32 compute-0 podman[90959]: 2025-09-30 14:15:32.640322141 +0000 UTC m=+0.069133072 container create c3d804d5f9d52bf313d4b5859e98fbad8db9e8c6eab033401c7b782f6ff0f334 (image=quay.io/ceph/ceph:v19, name=nostalgic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 14:15:32 compute-0 sudo[90983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:32 compute-0 sudo[90983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:32 compute-0 systemd[1]: Started libpod-conmon-c3d804d5f9d52bf313d4b5859e98fbad8db9e8c6eab033401c7b782f6ff0f334.scope.
Sep 30 14:15:32 compute-0 podman[90959]: 2025-09-30 14:15:32.600875582 +0000 UTC m=+0.029686533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b115839e9a5c8ff50dcb3ae107db5e39cc268f970a68c2f8f9e0c55a83d13b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b115839e9a5c8ff50dcb3ae107db5e39cc268f970a68c2f8f9e0c55a83d13b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b115839e9a5c8ff50dcb3ae107db5e39cc268f970a68c2f8f9e0c55a83d13b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:32 compute-0 podman[90959]: 2025-09-30 14:15:32.733717122 +0000 UTC m=+0.162528093 container init c3d804d5f9d52bf313d4b5859e98fbad8db9e8c6eab033401c7b782f6ff0f334 (image=quay.io/ceph/ceph:v19, name=nostalgic_matsumoto, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:15:32 compute-0 podman[90959]: 2025-09-30 14:15:32.739935066 +0000 UTC m=+0.168745997 container start c3d804d5f9d52bf313d4b5859e98fbad8db9e8c6eab033401c7b782f6ff0f334 (image=quay.io/ceph/ceph:v19, name=nostalgic_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:15:32 compute-0 podman[90959]: 2025-09-30 14:15:32.749039316 +0000 UTC m=+0.177850277 container attach c3d804d5f9d52bf313d4b5859e98fbad8db9e8c6eab033401c7b782f6ff0f334 (image=quay.io/ceph/ceph:v19, name=nostalgic_matsumoto, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:33 compute-0 systemd[1]: Reloading.
Sep 30 14:15:33 compute-0 systemd-rc-local-generator[91099]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:15:33 compute-0 systemd-sysv-generator[91103]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:15:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Sep 30 14:15:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/484468477' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Sep 30 14:15:33 compute-0 systemd[1]: Reloading.
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v13: 104 pgs: 1 unknown, 103 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 472 B/s wr, 18 op/s
Sep 30 14:15:33 compute-0 systemd-rc-local-generator[91141]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:15:33 compute-0 systemd-sysv-generator[91145]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:15:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Sep 30 14:15:33 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:15:33 compute-0 ceph-mon[74194]: Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:33 compute-0 ceph-mon[74194]: Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:33 compute-0 ceph-mon[74194]: Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='client.14385 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:33 compute-0 ceph-mon[74194]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:33 compute-0 ceph-mon[74194]: osdmap e42: 3 total, 3 up, 3 in
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/701204040' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='mgr.14331 192.168.122.100:0/2096387249' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:33 compute-0 ceph-mon[74194]: Deploying daemon node-exporter.compute-0 on compute-0
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Sep 30 14:15:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/484468477' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Sep 30 14:15:33 compute-0 ceph-mon[74194]: pgmap v13: 104 pgs: 1 unknown, 103 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 472 B/s wr, 18 op/s
Sep 30 14:15:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Sep 30 14:15:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Sep 30 14:15:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/484468477' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr handle_mgr_map respawning because set of enabled modules changed!
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  e: '/usr/bin/ceph-mgr'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  0: '/usr/bin/ceph-mgr'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  1: '-n'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  2: 'mgr.compute-0.buxlkm'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  3: '-f'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  4: '--setuser'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  5: 'ceph'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  6: '--setgroup'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  7: 'ceph'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  8: '--default-log-to-file=false'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  9: '--default-log-to-journald=true'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  10: '--default-log-to-stderr=false'
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Sep 30 14:15:33 compute-0 ceph-mgr[74485]: mgr respawn  exe_path /proc/self/exe
Sep 30 14:15:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Sep 30 14:15:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.buxlkm(active, since 9s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:33 compute-0 systemd[1]: libpod-c3d804d5f9d52bf313d4b5859e98fbad8db9e8c6eab033401c7b782f6ff0f334.scope: Deactivated successfully.
Sep 30 14:15:33 compute-0 podman[90959]: 2025-09-30 14:15:33.908308651 +0000 UTC m=+1.337119582 container died c3d804d5f9d52bf313d4b5859e98fbad8db9e8c6eab033401c7b782f6ff0f334 (image=quay.io/ceph/ceph:v19, name=nostalgic_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 14:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-27b115839e9a5c8ff50dcb3ae107db5e39cc268f970a68c2f8f9e0c55a83d13b-merged.mount: Deactivated successfully.
Sep 30 14:15:33 compute-0 podman[90959]: 2025-09-30 14:15:33.954153979 +0000 UTC m=+1.382964910 container remove c3d804d5f9d52bf313d4b5859e98fbad8db9e8c6eab033401c7b782f6ff0f334 (image=quay.io/ceph/ceph:v19, name=nostalgic_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:33 compute-0 bash[91194]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Sep 30 14:15:33 compute-0 sshd-session[89298]: Connection closed by 192.168.122.100 port 37166
Sep 30 14:15:33 compute-0 sshd-session[89284]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:15:33 compute-0 systemd[1]: libpod-conmon-c3d804d5f9d52bf313d4b5859e98fbad8db9e8c6eab033401c7b782f6ff0f334.scope: Deactivated successfully.
Sep 30 14:15:33 compute-0 systemd-logind[808]: Session 35 logged out. Waiting for processes to exit.
Sep 30 14:15:33 compute-0 sudo[90943]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ignoring --setuser ceph since I am not root
Sep 30 14:15:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ignoring --setgroup ceph since I am not root
Sep 30 14:15:34 compute-0 ceph-mgr[74485]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 14:15:34 compute-0 ceph-mgr[74485]: pidfile_write: ignore empty --pid-file
Sep 30 14:15:34 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'alerts'
Sep 30 14:15:34 compute-0 sudo[91260]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltwhjjqciefbfjjhjnwoupdrmwbmoitv ; /usr/bin/python3'
Sep 30 14:15:34 compute-0 sudo[91260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:34.136+0000 7f0700497140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:15:34 compute-0 ceph-mgr[74485]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:15:34 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'balancer'
Sep 30 14:15:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:34.235+0000 7f0700497140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:15:34 compute-0 ceph-mgr[74485]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:15:34 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'cephadm'
Sep 30 14:15:34 compute-0 python3[91262]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:34 compute-0 podman[91263]: 2025-09-30 14:15:34.297926986 +0000 UTC m=+0.023485900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:34 compute-0 bash[91194]: Getting image source signatures
Sep 30 14:15:34 compute-0 bash[91194]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Sep 30 14:15:34 compute-0 bash[91194]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Sep 30 14:15:34 compute-0 bash[91194]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Sep 30 14:15:34 compute-0 podman[91263]: 2025-09-30 14:15:34.459732379 +0000 UTC m=+0.185291283 container create 159bb0421a46f093384b6d4800c3ef78e952a1f1f4e9d462d5ea38ffd146179b (image=quay.io/ceph/ceph:v19, name=hopeful_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:15:34 compute-0 systemd[1]: Started libpod-conmon-159bb0421a46f093384b6d4800c3ef78e952a1f1f4e9d462d5ea38ffd146179b.scope.
Sep 30 14:15:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4dc40b59547bd51f3f4113c8aa70d167051a4ab99d18d06c11ac66205de328/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4dc40b59547bd51f3f4113c8aa70d167051a4ab99d18d06c11ac66205de328/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4dc40b59547bd51f3f4113c8aa70d167051a4ab99d18d06c11ac66205de328/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:34 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'crash'
Sep 30 14:15:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:35.072+0000 7f0700497140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:15:35 compute-0 ceph-mgr[74485]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:15:35 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'dashboard'
Sep 30 14:15:35 compute-0 podman[91263]: 2025-09-30 14:15:35.077556258 +0000 UTC m=+0.803115182 container init 159bb0421a46f093384b6d4800c3ef78e952a1f1f4e9d462d5ea38ffd146179b (image=quay.io/ceph/ceph:v19, name=hopeful_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:35 compute-0 podman[91263]: 2025-09-30 14:15:35.087472529 +0000 UTC m=+0.813031423 container start 159bb0421a46f093384b6d4800c3ef78e952a1f1f4e9d462d5ea38ffd146179b (image=quay.io/ceph/ceph:v19, name=hopeful_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:15:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:15:35 compute-0 ceph-mon[74194]: from='client.? ' entity='client.rgw.rgw.compute-2.evkboy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Sep 30 14:15:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/484468477' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Sep 30 14:15:35 compute-0 ceph-mon[74194]: osdmap e43: 3 total, 3 up, 3 in
Sep 30 14:15:35 compute-0 ceph-mon[74194]: mgrmap e17: compute-0.buxlkm(active, since 9s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:35 compute-0 podman[91263]: 2025-09-30 14:15:35.597364394 +0000 UTC m=+1.322923308 container attach 159bb0421a46f093384b6d4800c3ef78e952a1f1f4e9d462d5ea38ffd146179b (image=quay.io/ceph/ceph:v19, name=hopeful_borg, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:15:35 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'devicehealth'
Sep 30 14:15:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:35.843+0000 7f0700497140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:15:35 compute-0 ceph-mgr[74485]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:15:35 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 14:15:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Sep 30 14:15:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1750391643' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Sep 30 14:15:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 14:15:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 14:15:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   from numpy import show_config as show_numpy_config
Sep 30 14:15:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:36.020+0000 7f0700497140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:15:36 compute-0 ceph-mgr[74485]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:15:36 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'influx'
Sep 30 14:15:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:36.102+0000 7f0700497140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:15:36 compute-0 ceph-mgr[74485]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:15:36 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'insights'
Sep 30 14:15:36 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'iostat'
Sep 30 14:15:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:36.254+0000 7f0700497140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:15:36 compute-0 ceph-mgr[74485]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:15:36 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'k8sevents'
Sep 30 14:15:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1750391643' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Sep 30 14:15:36 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'localpool'
Sep 30 14:15:36 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 14:15:36 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1750391643' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Sep 30 14:15:36 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.buxlkm(active, since 12s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mirroring'
Sep 30 14:15:37 compute-0 systemd[1]: libpod-159bb0421a46f093384b6d4800c3ef78e952a1f1f4e9d462d5ea38ffd146179b.scope: Deactivated successfully.
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'nfs'
Sep 30 14:15:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:37.361+0000 7f0700497140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'orchestrator'
Sep 30 14:15:37 compute-0 podman[91361]: 2025-09-30 14:15:37.387697716 +0000 UTC m=+0.367107204 container died 159bb0421a46f093384b6d4800c3ef78e952a1f1f4e9d462d5ea38ffd146179b (image=quay.io/ceph/ceph:v19, name=hopeful_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 14:15:37 compute-0 bash[91194]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Sep 30 14:15:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:37.619+0000 7f0700497140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 14:15:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:37.705+0000 7f0700497140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_support'
Sep 30 14:15:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd4dc40b59547bd51f3f4113c8aa70d167051a4ab99d18d06c11ac66205de328-merged.mount: Deactivated successfully.
Sep 30 14:15:37 compute-0 bash[91194]: Writing manifest to image destination
Sep 30 14:15:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:37.782+0000 7f0700497140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 14:15:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:37.870+0000 7f0700497140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'progress'
Sep 30 14:15:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:37.949+0000 7f0700497140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:15:37 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'prometheus'
Sep 30 14:15:38 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1750391643' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Sep 30 14:15:38 compute-0 ceph-mon[74194]: mgrmap e18: compute-0.buxlkm(active, since 12s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:38.314+0000 7f0700497140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:15:38 compute-0 ceph-mgr[74485]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:15:38 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rbd_support'
Sep 30 14:15:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:38.411+0000 7f0700497140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:15:38 compute-0 ceph-mgr[74485]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:15:38 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'restful'
Sep 30 14:15:38 compute-0 podman[91194]: 2025-09-30 14:15:38.438996495 +0000 UTC m=+4.527196703 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Sep 30 14:15:38 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rgw'
Sep 30 14:15:38 compute-0 podman[91361]: 2025-09-30 14:15:38.643110383 +0000 UTC m=+1.622519921 container remove 159bb0421a46f093384b6d4800c3ef78e952a1f1f4e9d462d5ea38ffd146179b (image=quay.io/ceph/ceph:v19, name=hopeful_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:38 compute-0 systemd[1]: libpod-conmon-159bb0421a46f093384b6d4800c3ef78e952a1f1f4e9d462d5ea38ffd146179b.scope: Deactivated successfully.
Sep 30 14:15:38 compute-0 sudo[91260]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:38.874+0000 7f0700497140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:15:38 compute-0 ceph-mgr[74485]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:15:38 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rook'
Sep 30 14:15:38 compute-0 podman[91194]: 2025-09-30 14:15:38.996244038 +0000 UTC m=+5.084444266 container create 0d94fdcb0089ce3f537370219af53558d7149360386a0f8dbbd34c4af8a36ba9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c1b88179ff197e10250082c25cba9a1f3e4e0cff2c8bd2415f8dadc00fff3bc/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:39 compute-0 podman[91194]: 2025-09-30 14:15:39.415391571 +0000 UTC m=+5.503591779 container init 0d94fdcb0089ce3f537370219af53558d7149360386a0f8dbbd34c4af8a36ba9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:15:39 compute-0 podman[91194]: 2025-09-30 14:15:39.419841179 +0000 UTC m=+5.508041377 container start 0d94fdcb0089ce3f537370219af53558d7149360386a0f8dbbd34c4af8a36ba9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.426Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.426Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.426Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.426Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.427Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.427Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=arp
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=bcache
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=bonding
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=btrfs
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=conntrack
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=cpu
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=cpufreq
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=diskstats
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=dmi
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=edac
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=entropy
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=fibrechannel
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=filefd
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=filesystem
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=hwmon
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=infiniband
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=ipvs
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=loadavg
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=mdadm
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=meminfo
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=netclass
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=netdev
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=netstat
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=nfs
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=nfsd
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=nvme
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=os
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=pressure
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=rapl
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=schedstat
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=selinux
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=sockstat
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=softnet
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=stat
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=tapestats
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=textfile
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=thermal_zone
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=time
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=udp_queues
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=uname
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=vmstat
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=xfs
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.428Z caller=node_exporter.go:117 level=info collector=zfs
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.429Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[91447]: ts=2025-09-30T14:15:39.429Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Sep 30 14:15:39 compute-0 bash[91194]: 0d94fdcb0089ce3f537370219af53558d7149360386a0f8dbbd34c4af8a36ba9
Sep 30 14:15:39 compute-0 python3[91469]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 14:15:39 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:39.479+0000 7f0700497140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'selftest'
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:39.554+0000 7f0700497140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'snap_schedule'
Sep 30 14:15:39 compute-0 sudo[90983]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:39 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Sep 30 14:15:39 compute-0 systemd[1]: session-35.scope: Consumed 4.829s CPU time.
Sep 30 14:15:39 compute-0 systemd-logind[808]: Removed session 35.
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:39.647+0000 7f0700497140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'stats'
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'status'
Sep 30 14:15:39 compute-0 python3[91546]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759241739.133805-35419-127265020401435/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:39.812+0000 7f0700497140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telegraf'
Sep 30 14:15:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:39.881+0000 7f0700497140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:15:39 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telemetry'
Sep 30 14:15:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:40.052+0000 7f0700497140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 14:15:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:15:40 compute-0 sudo[91594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnxcuebstkhqjhskoxrbfcpdgsvvkpdz ; /usr/bin/python3'
Sep 30 14:15:40 compute-0 sudo[91594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:40.307+0000 7f0700497140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'volumes'
Sep 30 14:15:40 compute-0 python3[91596]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:40 compute-0 podman[91597]: 2025-09-30 14:15:40.540437415 +0000 UTC m=+0.050742958 container create fbcc7af519188b2b319477f5cfd67af565038121c6956ce82bdb2fa41e837a5d (image=quay.io/ceph/ceph:v19, name=kind_dirac, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:40 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.udzudc restarted
Sep 30 14:15:40 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.udzudc started
Sep 30 14:15:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:40.592+0000 7f0700497140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'zabbix'
Sep 30 14:15:40 compute-0 podman[91597]: 2025-09-30 14:15:40.515122178 +0000 UTC m=+0.025427751 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:40.667+0000 7f0700497140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: ms_deliver_dispatch: unhandled message 0x55a0b169b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr handle_mgr_map respawning because set of enabled modules changed!
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  e: '/usr/bin/ceph-mgr'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  0: '/usr/bin/ceph-mgr'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  1: '-n'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  2: 'mgr.compute-0.buxlkm'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  3: '-f'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  4: '--setuser'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  5: 'ceph'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  6: '--setgroup'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  7: 'ceph'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  8: '--default-log-to-file=false'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  9: '--default-log-to-journald=true'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  10: '--default-log-to-stderr=false'
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr respawn  exe_path /proc/self/exe
Sep 30 14:15:40 compute-0 systemd[1]: Started libpod-conmon-fbcc7af519188b2b319477f5cfd67af565038121c6956ce82bdb2fa41e837a5d.scope.
Sep 30 14:15:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aad6a9c72e6bed19a062da08c377a2735d82382929ee94a63a418cdd78ffcce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aad6a9c72e6bed19a062da08c377a2735d82382929ee94a63a418cdd78ffcce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aad6a9c72e6bed19a062da08c377a2735d82382929ee94a63a418cdd78ffcce/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ignoring --setuser ceph since I am not root
Sep 30 14:15:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ignoring --setgroup ceph since I am not root
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: pidfile_write: ignore empty --pid-file
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'alerts'
Sep 30 14:15:40 compute-0 podman[91597]: 2025-09-30 14:15:40.978509597 +0000 UTC m=+0.488815170 container init fbcc7af519188b2b319477f5cfd67af565038121c6956ce82bdb2fa41e837a5d (image=quay.io/ceph/ceph:v19, name=kind_dirac, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:15:40 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'balancer'
Sep 30 14:15:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:40.981+0000 7fdf57842140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:15:40 compute-0 podman[91597]: 2025-09-30 14:15:40.98544449 +0000 UTC m=+0.495750043 container start fbcc7af519188b2b319477f5cfd67af565038121c6956ce82bdb2fa41e837a5d (image=quay.io/ceph/ceph:v19, name=kind_dirac, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:15:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:41.072+0000 7fdf57842140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:15:41 compute-0 ceph-mgr[74485]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:15:41 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'cephadm'
Sep 30 14:15:41 compute-0 podman[91597]: 2025-09-30 14:15:41.149883843 +0000 UTC m=+0.660189416 container attach fbcc7af519188b2b319477f5cfd67af565038121c6956ce82bdb2fa41e837a5d (image=quay.io/ceph/ceph:v19, name=kind_dirac, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:41 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.buxlkm(active, since 16s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:41 compute-0 ceph-mon[74194]: Standby manager daemon compute-2.udzudc restarted
Sep 30 14:15:41 compute-0 ceph-mon[74194]: Standby manager daemon compute-2.udzudc started
Sep 30 14:15:41 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'crash'
Sep 30 14:15:41 compute-0 ceph-mgr[74485]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:15:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:41.930+0000 7fdf57842140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:15:41 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'dashboard'
Sep 30 14:15:42 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'devicehealth'
Sep 30 14:15:42 compute-0 ceph-mgr[74485]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:15:42 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 14:15:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:42.609+0000 7fdf57842140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:15:42 compute-0 ceph-mon[74194]: mgrmap e19: compute-0.buxlkm(active, since 16s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 14:15:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 14:15:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   from numpy import show_config as show_numpy_config
Sep 30 14:15:42 compute-0 ceph-mgr[74485]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:15:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:42.784+0000 7fdf57842140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:15:42 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'influx'
Sep 30 14:15:42 compute-0 ceph-mgr[74485]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:15:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:42.857+0000 7fdf57842140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:15:42 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'insights'
Sep 30 14:15:42 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'iostat'
Sep 30 14:15:43 compute-0 ceph-mgr[74485]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:15:43 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'k8sevents'
Sep 30 14:15:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:43.026+0000 7fdf57842140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:15:43 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'localpool'
Sep 30 14:15:43 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 14:15:43 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mirroring'
Sep 30 14:15:43 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'nfs'
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:44.126+0000 7fdf57842140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'orchestrator'
Sep 30 14:15:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:44.355+0000 7fdf57842140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:44.436+0000 7fdf57842140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_support'
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:44.513+0000 7fdf57842140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'progress'
Sep 30 14:15:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:44.598+0000 7fdf57842140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:44.668+0000 7fdf57842140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:15:44 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'prometheus'
Sep 30 14:15:45 compute-0 ceph-mgr[74485]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:15:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:45.056+0000 7fdf57842140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:15:45 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rbd_support'
Sep 30 14:15:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:15:45 compute-0 ceph-mgr[74485]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:15:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:45.163+0000 7fdf57842140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:15:45 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'restful'
Sep 30 14:15:45 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rgw'
Sep 30 14:15:45 compute-0 ceph-mgr[74485]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:15:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:45.653+0000 7fdf57842140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:15:45 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rook'
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:46.247+0000 7fdf57842140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'selftest'
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'snap_schedule'
Sep 30 14:15:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:46.324+0000 7fdf57842140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:46.408+0000 7fdf57842140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'stats'
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'status'
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:46.574+0000 7fdf57842140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telegraf'
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:46.651+0000 7fdf57842140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telemetry'
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:46.820+0000 7fdf57842140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:15:46 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'volumes'
Sep 30 14:15:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:47.044+0000 7fdf57842140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.udzudc restarted
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.udzudc started
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.buxlkm(active, since 22s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:15:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:47.351+0000 7fdf57842140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'zabbix'
Sep 30 14:15:47 compute-0 ceph-mon[74194]: Standby manager daemon compute-2.udzudc restarted
Sep 30 14:15:47 compute-0 ceph-mon[74194]: Standby manager daemon compute-2.udzudc started
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:15:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:15:47.429+0000 7fdf57842140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Active manager daemon compute-0.buxlkm restarted
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.buxlkm
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: ms_deliver_dispatch: unhandled message 0x556e6fb33860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr handle_mgr_map Activating!
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr handle_mgr_map I am now activating
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.buxlkm(active, starting, since 0.351083s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.zeqptq restarted
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.zeqptq started
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e1 all = 1
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Manager daemon compute-0.buxlkm is now available
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: balancer
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [balancer INFO root] Starting
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:15:47
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: cephadm
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: crash
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: dashboard
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: devicehealth
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [dashboard INFO access_control] Loading user roles DB version=2
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [dashboard INFO sso] Loading SSO DB version=1
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Starting
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: iostat
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [dashboard INFO root] Configured CherryPy, starting engine...
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: nfs
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: orchestrator
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: pg_autoscaler
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: progress
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [progress INFO root] Loading...
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fdedd3eaca0>, <progress.module.GhostEvent object at 0x7fdedd3eaf10>, <progress.module.GhostEvent object at 0x7fdedd3eaf40>, <progress.module.GhostEvent object at 0x7fdedd3eaf70>, <progress.module.GhostEvent object at 0x7fdedd3eafa0>, <progress.module.GhostEvent object at 0x7fdedd3eafd0>, <progress.module.GhostEvent object at 0x7fdedd3f6040>, <progress.module.GhostEvent object at 0x7fdedd3f6070>, <progress.module.GhostEvent object at 0x7fdedd3f60a0>] historic events
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [progress INFO root] Loaded OSDMap, ready.
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] recovery thread starting
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] starting setup
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: rbd_support
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: restful
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [restful INFO root] server_addr: :: server_port: 8003
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [restful WARNING root] server not running: no certificate configured
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: status
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: telemetry
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: volumes
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] PerfHandler: starting
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: vms, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: volumes, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: backups, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: images, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TaskHandler: starting
Sep 30 14:15:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"} v 0)
Sep 30 14:15:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"}]: dispatch
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Sep 30 14:15:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] setup complete
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Sep 30 14:15:48 compute-0 sshd-session[91784]: Accepted publickey for ceph-admin from 192.168.122.100 port 39336 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Sep 30 14:15:48 compute-0 systemd-logind[808]: New session 36 of user ceph-admin.
Sep 30 14:15:48 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Sep 30 14:15:48 compute-0 sshd-session[91784]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:15:48 compute-0 sudo[91799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:48 compute-0 sudo[91799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:48 compute-0 sudo[91799]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.module] Engine started.
Sep 30 14:15:48 compute-0 sudo[91824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:15:48 compute-0 sudo[91824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:48 compute-0 ceph-mon[74194]: mgrmap e20: compute-0.buxlkm(active, since 22s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:48 compute-0 ceph-mon[74194]: Active manager daemon compute-0.buxlkm restarted
Sep 30 14:15:48 compute-0 ceph-mon[74194]: Activating manager daemon compute-0.buxlkm
Sep 30 14:15:48 compute-0 ceph-mon[74194]: osdmap e44: 3 total, 3 up, 3 in
Sep 30 14:15:48 compute-0 ceph-mon[74194]: mgrmap e21: compute-0.buxlkm(active, starting, since 0.351083s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:48 compute-0 ceph-mon[74194]: Standby manager daemon compute-1.zeqptq restarted
Sep 30 14:15:48 compute-0 ceph-mon[74194]: Standby manager daemon compute-1.zeqptq started
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: Manager daemon compute-0.buxlkm is now available
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.buxlkm(active, since 1.37422s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Sep 30 14:15:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Sep 30 14:15:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Sep 30 14:15:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Sep 30 14:15:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Sep 30 14:15:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Sep 30 14:15:48 compute-0 ceph-mon[74194]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Sep 30 14:15:48 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Sep 30 14:15:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0[74190]: 2025-09-30T14:15:48.811+0000 7f7cdb176640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v3: 104 pgs: 104 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:48 compute-0 podman[91917]: 2025-09-30 14:15:48.840887525 +0000 UTC m=+0.066725989 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Sep 30 14:15:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e2 new map
Sep 30 14:15:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-09-30T14:15:48.812491+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T14:15:48.812444+0000
                                           modified        2025-09-30T14:15:48.812444+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Sep 30 14:15:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Sep 30 14:15:48 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Sep 30 14:15:48 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 14:15:48 compute-0 podman[91917]: 2025-09-30 14:15:48.936620168 +0000 UTC m=+0.162458602 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:15:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:48 compute-0 ceph-mgr[74485]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Sep 30 14:15:48 compute-0 systemd[1]: libpod-fbcc7af519188b2b319477f5cfd67af565038121c6956ce82bdb2fa41e837a5d.scope: Deactivated successfully.
Sep 30 14:15:48 compute-0 podman[91597]: 2025-09-30 14:15:48.973853539 +0000 UTC m=+8.484159182 container died fbcc7af519188b2b319477f5cfd67af565038121c6956ce82bdb2fa41e837a5d (image=quay.io/ceph/ceph:v19, name=kind_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aad6a9c72e6bed19a062da08c377a2735d82382929ee94a63a418cdd78ffcce-merged.mount: Deactivated successfully.
Sep 30 14:15:49 compute-0 podman[91597]: 2025-09-30 14:15:49.022699206 +0000 UTC m=+8.533004759 container remove fbcc7af519188b2b319477f5cfd67af565038121c6956ce82bdb2fa41e837a5d (image=quay.io/ceph/ceph:v19, name=kind_dirac, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:15:49 compute-0 systemd[1]: libpod-conmon-fbcc7af519188b2b319477f5cfd67af565038121c6956ce82bdb2fa41e837a5d.scope: Deactivated successfully.
Sep 30 14:15:49 compute-0 sudo[91594]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:15:49] ENGINE Bus STARTING
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:15:49] ENGINE Bus STARTING
Sep 30 14:15:49 compute-0 sudo[92049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epnevfmarajydmyiqofbnksgxpafmgge ; /usr/bin/python3'
Sep 30 14:15:49 compute-0 sudo[92049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:15:49] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:15:49] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:15:49] ENGINE Client ('192.168.122.100', 35416) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:15:49] ENGINE Client ('192.168.122.100', 35416) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:15:49] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:15:49] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:15:49] ENGINE Bus STARTED
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:15:49] ENGINE Bus STARTED
Sep 30 14:15:49 compute-0 python3[92056]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:49 compute-0 podman[92100]: 2025-09-30 14:15:49.43198267 +0000 UTC m=+0.108327256 container exec 0d94fdcb0089ce3f537370219af53558d7149360386a0f8dbbd34c4af8a36ba9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:15:49 compute-0 podman[92115]: 2025-09-30 14:15:49.444292854 +0000 UTC m=+0.071800773 container create 983c005d2fd3a552096e63564c4e5548917e218f8e01d5cc68a022d93413aa6b (image=quay.io/ceph/ceph:v19, name=reverent_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:15:49 compute-0 systemd[1]: Started libpod-conmon-983c005d2fd3a552096e63564c4e5548917e218f8e01d5cc68a022d93413aa6b.scope.
Sep 30 14:15:49 compute-0 podman[92100]: 2025-09-30 14:15:49.469290133 +0000 UTC m=+0.145634699 container exec_died 0d94fdcb0089ce3f537370219af53558d7149360386a0f8dbbd34c4af8a36ba9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:15:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546a08b2ec39418a1680b6ee6bc4cba6060727462e29d97a9b192e1ee504fb77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546a08b2ec39418a1680b6ee6bc4cba6060727462e29d97a9b192e1ee504fb77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546a08b2ec39418a1680b6ee6bc4cba6060727462e29d97a9b192e1ee504fb77/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:49 compute-0 podman[92115]: 2025-09-30 14:15:49.510885849 +0000 UTC m=+0.138393768 container init 983c005d2fd3a552096e63564c4e5548917e218f8e01d5cc68a022d93413aa6b (image=quay.io/ceph/ceph:v19, name=reverent_noyce, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:15:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:15:49 compute-0 podman[92115]: 2025-09-30 14:15:49.516843186 +0000 UTC m=+0.144351105 container start 983c005d2fd3a552096e63564c4e5548917e218f8e01d5cc68a022d93413aa6b (image=quay.io/ceph/ceph:v19, name=reverent_noyce, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:49 compute-0 podman[92115]: 2025-09-30 14:15:49.519508806 +0000 UTC m=+0.147016725 container attach 983c005d2fd3a552096e63564c4e5548917e218f8e01d5cc68a022d93413aa6b (image=quay.io/ceph/ceph:v19, name=reverent_noyce, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:49 compute-0 sudo[91824]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:49 compute-0 podman[92115]: 2025-09-30 14:15:49.429499154 +0000 UTC m=+0.057007093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:15:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:15:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v5: 104 pgs: 104 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:49 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 14:15:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:15:50 compute-0 ceph-mon[74194]: mgrmap e22: compute-0.buxlkm(active, since 1.37422s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Sep 30 14:15:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Sep 30 14:15:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Sep 30 14:15:50 compute-0 ceph-mon[74194]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Sep 30 14:15:50 compute-0 ceph-mon[74194]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Sep 30 14:15:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Sep 30 14:15:50 compute-0 ceph-mon[74194]: osdmap e45: 3 total, 3 up, 3 in
Sep 30 14:15:50 compute-0 ceph-mon[74194]: fsmap cephfs:0
Sep 30 14:15:50 compute-0 ceph-mon[74194]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:50 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Check health
Sep 30 14:15:50 compute-0 sudo[92191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:50 compute-0 sudo[92191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:50 compute-0 sudo[92191]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:50 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.buxlkm(active, since 2s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:50 compute-0 sudo[92216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:15:50 compute-0 sudo[92216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:50 compute-0 sshd-session[92114]: Connection closed by authenticating user root 80.94.95.115 port 18118 [preauth]
Sep 30 14:15:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:50 compute-0 reverent_noyce[92153]: Scheduled mds.cephfs update...
Sep 30 14:15:50 compute-0 podman[92115]: 2025-09-30 14:15:50.527871335 +0000 UTC m=+1.155379264 container died 983c005d2fd3a552096e63564c4e5548917e218f8e01d5cc68a022d93413aa6b (image=quay.io/ceph/ceph:v19, name=reverent_noyce, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:50 compute-0 systemd[1]: libpod-983c005d2fd3a552096e63564c4e5548917e218f8e01d5cc68a022d93413aa6b.scope: Deactivated successfully.
Sep 30 14:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-546a08b2ec39418a1680b6ee6bc4cba6060727462e29d97a9b192e1ee504fb77-merged.mount: Deactivated successfully.
Sep 30 14:15:50 compute-0 podman[92115]: 2025-09-30 14:15:50.571747641 +0000 UTC m=+1.199255560 container remove 983c005d2fd3a552096e63564c4e5548917e218f8e01d5cc68a022d93413aa6b (image=quay.io/ceph/ceph:v19, name=reverent_noyce, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:15:50 compute-0 systemd[1]: libpod-conmon-983c005d2fd3a552096e63564c4e5548917e218f8e01d5cc68a022d93413aa6b.scope: Deactivated successfully.
Sep 30 14:15:50 compute-0 sudo[92049]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:15:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:15:50 compute-0 sudo[92292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdaevqzhxkujqaouphqnyicrznrsaiem ; /usr/bin/python3'
Sep 30 14:15:50 compute-0 sudo[92292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:50 compute-0 python3[92296]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:50 compute-0 sudo[92216]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:50 compute-0 podman[92311]: 2025-09-30 14:15:50.930236186 +0000 UTC m=+0.044563505 container create a2c8b1984260c7bebffd61b48f6deeb054ef43aada208bbeda39d4c1e5c88553 (image=quay.io/ceph/ceph:v19, name=tender_swanson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:15:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Sep 30 14:15:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:50 compute-0 ceph-mgr[74485]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 14:15:50 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 14:15:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Sep 30 14:15:50 compute-0 ceph-mgr[74485]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:15:50 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:15:50 compute-0 systemd[1]: Started libpod-conmon-a2c8b1984260c7bebffd61b48f6deeb054ef43aada208bbeda39d4c1e5c88553.scope.
Sep 30 14:15:51 compute-0 podman[92311]: 2025-09-30 14:15:50.910545558 +0000 UTC m=+0.024872937 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:51 compute-0 sudo[92326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:15:51 compute-0 sudo[92326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a53f048f2af0beb0a0c71f1db495d701639a935edca06c5cf2948cd88ceefa3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a53f048f2af0beb0a0c71f1db495d701639a935edca06c5cf2948cd88ceefa3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a53f048f2af0beb0a0c71f1db495d701639a935edca06c5cf2948cd88ceefa3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:51 compute-0 sudo[92326]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:51 compute-0 podman[92311]: 2025-09-30 14:15:51.02600071 +0000 UTC m=+0.140328069 container init a2c8b1984260c7bebffd61b48f6deeb054ef43aada208bbeda39d4c1e5c88553 (image=quay.io/ceph/ceph:v19, name=tender_swanson, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:51 compute-0 podman[92311]: 2025-09-30 14:15:51.032682216 +0000 UTC m=+0.147009545 container start a2c8b1984260c7bebffd61b48f6deeb054ef43aada208bbeda39d4c1e5c88553 (image=quay.io/ceph/ceph:v19, name=tender_swanson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:15:51 compute-0 podman[92311]: 2025-09-30 14:15:51.039249859 +0000 UTC m=+0.153577218 container attach a2c8b1984260c7bebffd61b48f6deeb054ef43aada208bbeda39d4c1e5c88553 (image=quay.io/ceph/ceph:v19, name=tender_swanson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:15:51 compute-0 sudo[92354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 14:15:51 compute-0 sudo[92354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:51 compute-0 sudo[92354]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:15:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:15:51 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14454 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Sep 30 14:15:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Sep 30 14:15:51 compute-0 ceph-mon[74194]: [30/Sep/2025:14:15:49] ENGINE Bus STARTING
Sep 30 14:15:51 compute-0 ceph-mon[74194]: [30/Sep/2025:14:15:49] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:15:51 compute-0 ceph-mon[74194]: [30/Sep/2025:14:15:49] ENGINE Client ('192.168.122.100', 35416) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:15:51 compute-0 ceph-mon[74194]: [30/Sep/2025:14:15:49] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:15:51 compute-0 ceph-mon[74194]: [30/Sep/2025:14:15:49] ENGINE Bus STARTED
Sep 30 14:15:51 compute-0 ceph-mon[74194]: pgmap v5: 104 pgs: 104 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:51 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:51 compute-0 ceph-mon[74194]: from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:51 compute-0 ceph-mon[74194]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:51 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:51 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:51 compute-0 ceph-mon[74194]: mgrmap e23: compute-0.buxlkm(active, since 2s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:51 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:51 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:51 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:51 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:15:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:15:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v6: 104 pgs: 104 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Sep 30 14:15:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Sep 30 14:15:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 14:15:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:15:52 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:15:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:15:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:15:52 compute-0 sudo[92420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 14:15:52 compute-0 sudo[92420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92420]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Sep 30 14:15:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Sep 30 14:15:52 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.buxlkm(active, since 4s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:52 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Sep 30 14:15:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Sep 30 14:15:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Sep 30 14:15:52 compute-0 sudo[92445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph
Sep 30 14:15:52 compute-0 sudo[92445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92445]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 sudo[92470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:52 compute-0 sudo[92470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92470]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 sudo[92495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:52 compute-0 sudo[92495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92495]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 sudo[92520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:52 compute-0 sudo[92520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92520]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 sudo[92568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:52 compute-0 sudo[92568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92568]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 sudo[92593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:15:52 compute-0 sudo[92593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92593]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 ceph-mon[74194]: Adjusting osd_memory_target on compute-1 to 127.8M
Sep 30 14:15:52 compute-0 ceph-mon[74194]: Unable to set osd_memory_target on compute-1 to 134071500: error parsing value: Value '134071500' is below minimum 939524096
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='client.14454 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:52 compute-0 ceph-mon[74194]: pgmap v6: 104 pgs: 104 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Sep 30 14:15:52 compute-0 ceph-mon[74194]: mgrmap e24: compute-0.buxlkm(active, since 4s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:52 compute-0 ceph-mon[74194]: osdmap e46: 3 total, 3 up, 3 in
Sep 30 14:15:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:52 compute-0 sudo[92618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Sep 30 14:15:52 compute-0 sudo[92618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92618]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:52 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:52 compute-0 sudo[92643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:15:52 compute-0 sudo[92643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92643]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 sudo[92668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:15:52 compute-0 sudo[92668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92668]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 sudo[92693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:52 compute-0 sudo[92693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92693]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 sudo[92718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:52 compute-0 sudo[92718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92718]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 sudo[92743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:52 compute-0 sudo[92743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92743]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:52 compute-0 sudo[92791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:52 compute-0 sudo[92791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:52 compute-0 sudo[92791]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 sudo[92816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:15:53 compute-0 sudo[92816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[92816]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 sudo[92841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:53 compute-0 sudo[92841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[92841]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 sudo[92866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 14:15:53 compute-0 sudo[92866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Sep 30 14:15:53 compute-0 sudo[92866]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Sep 30 14:15:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Sep 30 14:15:53 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Sep 30 14:15:53 compute-0 sudo[92891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph
Sep 30 14:15:53 compute-0 sudo[92891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[92891]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:15:53 compute-0 sudo[92926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:15:53 compute-0 sudo[92926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[92926]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 sudo[92951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:53 compute-0 sudo[92951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[92951]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 sudo[92976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:15:53 compute-0 sudo[92976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[92976]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 sudo[93024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:15:53 compute-0 sudo[93024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[93024]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 sudo[93049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:15:53 compute-0 sudo[93049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[93049]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 sudo[93074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:53 compute-0 sudo[93074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[93074]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:53 compute-0 systemd[1]: libpod-a2c8b1984260c7bebffd61b48f6deeb054ef43aada208bbeda39d4c1e5c88553.scope: Deactivated successfully.
Sep 30 14:15:53 compute-0 podman[92311]: 2025-09-30 14:15:53.657046862 +0000 UTC m=+2.771374191 container died a2c8b1984260c7bebffd61b48f6deeb054ef43aada208bbeda39d4c1e5c88553 (image=quay.io/ceph/ceph:v19, name=tender_swanson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a53f048f2af0beb0a0c71f1db495d701639a935edca06c5cf2948cd88ceefa3-merged.mount: Deactivated successfully.
Sep 30 14:15:53 compute-0 sudo[93100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:15:53 compute-0 sudo[93100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[93100]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 sudo[93139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:15:53 compute-0 sudo[93139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[93139]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 podman[92311]: 2025-09-30 14:15:53.769757712 +0000 UTC m=+2.884085041 container remove a2c8b1984260c7bebffd61b48f6deeb054ef43aada208bbeda39d4c1e5c88553 (image=quay.io/ceph/ceph:v19, name=tender_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 14:15:53 compute-0 systemd[1]: libpod-conmon-a2c8b1984260c7bebffd61b48f6deeb054ef43aada208bbeda39d4c1e5c88553.scope: Deactivated successfully.
Sep 30 14:15:53 compute-0 sudo[92292]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v9: 105 pgs: 1 unknown, 104 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:53 compute-0 sudo[93164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:15:53 compute-0 sudo[93164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[93164]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 sudo[93189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:15:53 compute-0 sudo[93189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[93189]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:53 compute-0 ceph-mon[74194]: Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:15:53 compute-0 ceph-mon[74194]: Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:15:53 compute-0 ceph-mon[74194]: Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:15:53 compute-0 ceph-mon[74194]: Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:53 compute-0 ceph-mon[74194]: Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:53 compute-0 ceph-mon[74194]: Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:15:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Sep 30 14:15:53 compute-0 ceph-mon[74194]: osdmap e47: 3 total, 3 up, 3 in
Sep 30 14:15:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:53 compute-0 sudo[93214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:15:53 compute-0 sudo[93214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:53 compute-0 sudo[93214]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:54 compute-0 sudo[93262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:15:54 compute-0 sudo[93262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:15:54 compute-0 sudo[93262]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:54 compute-0 sudo[93287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:15:54 compute-0 sudo[93287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:54 compute-0 sudo[93287]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:15:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:54 compute-0 sudo[93325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:54 compute-0 sudo[93325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:15:54 compute-0 sudo[93325]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:15:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Sep 30 14:15:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:15:54 compute-0 sudo[93412]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllnkywvqknabhlkstijmfseubgupeqp ; /usr/bin/python3'
Sep 30 14:15:54 compute-0 sudo[93412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:15:54 compute-0 python3[93414]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Sep 30 14:15:54 compute-0 sudo[93412]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Sep 30 14:15:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:54 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Sep 30 14:15:54 compute-0 sudo[93485]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bybzmjtfikctzgkfruqzabworccjhuqd ; /usr/bin/python3'
Sep 30 14:15:54 compute-0 sudo[93485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:54 compute-0 python3[93487]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759241754.1564746-35454-176083497335161/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=568fd117e3e38e19bc8df91cc4c576927d41f3c4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:15:54 compute-0 sudo[93485]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:54 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.buxlkm(active, since 7s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:15:55 compute-0 sudo[93535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wveoxhrzmeqskmbpbkbrfkarlbcnnrlw ; /usr/bin/python3'
Sep 30 14:15:55 compute-0 sudo[93535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:15:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:55 compute-0 python3[93537]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:55 compute-0 podman[93538]: 2025-09-30 14:15:55.393636789 +0000 UTC m=+0.087907507 container create 60998f799c28fb1549963495da010b0c7b801c2de6b81a62b16d5eed0c24bbb9 (image=quay.io/ceph/ceph:v19, name=nostalgic_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:15:55 compute-0 podman[93538]: 2025-09-30 14:15:55.325551625 +0000 UTC m=+0.019822363 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:55 compute-0 systemd[1]: Started libpod-conmon-60998f799c28fb1549963495da010b0c7b801c2de6b81a62b16d5eed0c24bbb9.scope.
Sep 30 14:15:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895f72706623ed4e3baaacb845e4ae74308adc766707affdc6a1f3c906f1ca48/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895f72706623ed4e3baaacb845e4ae74308adc766707affdc6a1f3c906f1ca48/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:55 compute-0 podman[93538]: 2025-09-30 14:15:55.705464624 +0000 UTC m=+0.399735362 container init 60998f799c28fb1549963495da010b0c7b801c2de6b81a62b16d5eed0c24bbb9 (image=quay.io/ceph/ceph:v19, name=nostalgic_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 14:15:55 compute-0 podman[93538]: 2025-09-30 14:15:55.713323521 +0000 UTC m=+0.407594229 container start 60998f799c28fb1549963495da010b0c7b801c2de6b81a62b16d5eed0c24bbb9 (image=quay.io/ceph/ceph:v19, name=nostalgic_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:15:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v11: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Sep 30 14:15:55 compute-0 ceph-mon[74194]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:55 compute-0 ceph-mon[74194]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:55 compute-0 ceph-mon[74194]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:15:55 compute-0 ceph-mon[74194]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:55 compute-0 ceph-mon[74194]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Sep 30 14:15:55 compute-0 ceph-mon[74194]: Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:55 compute-0 ceph-mon[74194]: Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:55 compute-0 ceph-mon[74194]: Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:15:55 compute-0 ceph-mon[74194]: pgmap v9: 105 pgs: 1 unknown, 104 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:15:55 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:55 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:55 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:55 compute-0 ceph-mon[74194]: osdmap e48: 3 total, 3 up, 3 in
Sep 30 14:15:55 compute-0 podman[93538]: 2025-09-30 14:15:55.912573231 +0000 UTC m=+0.606843949 container attach 60998f799c28fb1549963495da010b0c7b801c2de6b81a62b16d5eed0c24bbb9 (image=quay.io/ceph/ceph:v19, name=nostalgic_wu, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:15:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:15:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Sep 30 14:15:56 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2512736711' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Sep 30 14:15:56 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:56 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev d573815c-452d-47ce-ab05-00264a997cf4 (Updating node-exporter deployment (+2 -> 3))
Sep 30 14:15:56 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Sep 30 14:15:56 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Sep 30 14:15:56 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2512736711' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Sep 30 14:15:56 compute-0 systemd[1]: libpod-60998f799c28fb1549963495da010b0c7b801c2de6b81a62b16d5eed0c24bbb9.scope: Deactivated successfully.
Sep 30 14:15:56 compute-0 podman[93538]: 2025-09-30 14:15:56.525415118 +0000 UTC m=+1.219685836 container died 60998f799c28fb1549963495da010b0c7b801c2de6b81a62b16d5eed0c24bbb9 (image=quay.io/ceph/ceph:v19, name=nostalgic_wu, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-895f72706623ed4e3baaacb845e4ae74308adc766707affdc6a1f3c906f1ca48-merged.mount: Deactivated successfully.
Sep 30 14:15:56 compute-0 podman[93538]: 2025-09-30 14:15:56.973223488 +0000 UTC m=+1.667494206 container remove 60998f799c28fb1549963495da010b0c7b801c2de6b81a62b16d5eed0c24bbb9 (image=quay.io/ceph/ceph:v19, name=nostalgic_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:56 compute-0 systemd[1]: libpod-conmon-60998f799c28fb1549963495da010b0c7b801c2de6b81a62b16d5eed0c24bbb9.scope: Deactivated successfully.
Sep 30 14:15:56 compute-0 sudo[93535]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:57 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:57 compute-0 ceph-mon[74194]: mgrmap e25: compute-0.buxlkm(active, since 7s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:15:57 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:57 compute-0 ceph-mon[74194]: pgmap v11: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Sep 30 14:15:57 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:57 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2512736711' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Sep 30 14:15:57 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:57 compute-0 ceph-mon[74194]: Deploying daemon node-exporter.compute-1 on compute-1
Sep 30 14:15:57 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2512736711' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Sep 30 14:15:57 compute-0 sudo[93616]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nskztjhkfmkdoviuaimxhenvyvjfeupm ; /usr/bin/python3'
Sep 30 14:15:57 compute-0 sudo[93616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:57 compute-0 python3[93618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v12: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Sep 30 14:15:57 compute-0 podman[93620]: 2025-09-30 14:15:57.809495502 +0000 UTC m=+0.043553799 container create 40b657dd7ed0d615e0745beb0a2c9d2181dd9e1dea007f5cc1c83d20e75cee18 (image=quay.io/ceph/ceph:v19, name=zen_pare, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:15:57 compute-0 systemd[1]: Started libpod-conmon-40b657dd7ed0d615e0745beb0a2c9d2181dd9e1dea007f5cc1c83d20e75cee18.scope.
Sep 30 14:15:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fef2f0826a6fcc64cd54f755ec0fb66efca3e281146b8fd0f978af68257e004/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fef2f0826a6fcc64cd54f755ec0fb66efca3e281146b8fd0f978af68257e004/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:57 compute-0 podman[93620]: 2025-09-30 14:15:57.788966581 +0000 UTC m=+0.023024908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:57 compute-0 podman[93620]: 2025-09-30 14:15:57.896539205 +0000 UTC m=+0.130597522 container init 40b657dd7ed0d615e0745beb0a2c9d2181dd9e1dea007f5cc1c83d20e75cee18 (image=quay.io/ceph/ceph:v19, name=zen_pare, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:15:57 compute-0 podman[93620]: 2025-09-30 14:15:57.902488602 +0000 UTC m=+0.136546899 container start 40b657dd7ed0d615e0745beb0a2c9d2181dd9e1dea007f5cc1c83d20e75cee18 (image=quay.io/ceph/ceph:v19, name=zen_pare, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:15:57 compute-0 podman[93620]: 2025-09-30 14:15:57.906268592 +0000 UTC m=+0.140326909 container attach 40b657dd7ed0d615e0745beb0a2c9d2181dd9e1dea007f5cc1c83d20e75cee18 (image=quay.io/ceph/ceph:v19, name=zen_pare, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:15:58 compute-0 ceph-mon[74194]: pgmap v12: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Sep 30 14:15:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Sep 30 14:15:58 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3702902197' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 14:15:58 compute-0 zen_pare[93636]: 
Sep 30 14:15:58 compute-0 zen_pare[93636]: {"fsid":"5e3c7776-ac03-5698-b79f-a6dc2d80cae6","health":{"status":"HEALTH_ERR","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false},"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":84,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":48,"num_osds":3,"num_up_osds":3,"osd_up_since":1759241709,"num_in_osds":3,"osd_in_since":1759241682,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":105}],"num_pgs":105,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":84570112,"bytes_avail":64327356416,"bytes_total":64411926528,"read_bytes_sec":30025,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2025-09-30T14:15:48:812491+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-09-30T14:15:49.794924+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.buxlkm":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.zeqptq":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.udzudc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","24148":{"start_epoch":5,"start_stamp":"2025-09-30T14:15:48.818621+0000","gid":24148,"addr":"192.168.122.102:0/701204040","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast 
endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.evkboy","kernel_description":"#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025","kernel_version":"5.14.0-617.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864112","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"8c90dce2-7f45-4620-bb41-00dca60054c7","zone_name":"default","zonegroup_id":"4cef40d9-d614-4ca3-b034-9be9bc4080b8","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{}}
Sep 30 14:15:58 compute-0 systemd[1]: libpod-40b657dd7ed0d615e0745beb0a2c9d2181dd9e1dea007f5cc1c83d20e75cee18.scope: Deactivated successfully.
Sep 30 14:15:58 compute-0 podman[93620]: 2025-09-30 14:15:58.350520577 +0000 UTC m=+0.584578874 container died 40b657dd7ed0d615e0745beb0a2c9d2181dd9e1dea007f5cc1c83d20e75cee18 (image=quay.io/ceph/ceph:v19, name=zen_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fef2f0826a6fcc64cd54f755ec0fb66efca3e281146b8fd0f978af68257e004-merged.mount: Deactivated successfully.
Sep 30 14:15:58 compute-0 podman[93620]: 2025-09-30 14:15:58.386435993 +0000 UTC m=+0.620494290 container remove 40b657dd7ed0d615e0745beb0a2c9d2181dd9e1dea007f5cc1c83d20e75cee18 (image=quay.io/ceph/ceph:v19, name=zen_pare, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:15:58 compute-0 systemd[1]: libpod-conmon-40b657dd7ed0d615e0745beb0a2c9d2181dd9e1dea007f5cc1c83d20e75cee18.scope: Deactivated successfully.
Sep 30 14:15:58 compute-0 sudo[93616]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:58 compute-0 sudo[93696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pelbmdiwjfujbenzqqlpgwyrtdtsmpew ; /usr/bin/python3'
Sep 30 14:15:58 compute-0 sudo[93696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:58 compute-0 python3[93698]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:58 compute-0 podman[93699]: 2025-09-30 14:15:58.730120929 +0000 UTC m=+0.036762300 container create bae4bf4b80c1580864f5ef969e5e61520d94daa1191bd1cba7639d84610b5678 (image=quay.io/ceph/ceph:v19, name=sleepy_burnell, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:58 compute-0 systemd[1]: Started libpod-conmon-bae4bf4b80c1580864f5ef969e5e61520d94daa1191bd1cba7639d84610b5678.scope.
Sep 30 14:15:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:15:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a89598ee21a281eccfa0ab08db16380297be367a4f49271cde0b0454b6593d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a89598ee21a281eccfa0ab08db16380297be367a4f49271cde0b0454b6593d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:15:58 compute-0 podman[93699]: 2025-09-30 14:15:58.791562448 +0000 UTC m=+0.098203859 container init bae4bf4b80c1580864f5ef969e5e61520d94daa1191bd1cba7639d84610b5678 (image=quay.io/ceph/ceph:v19, name=sleepy_burnell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:15:58 compute-0 podman[93699]: 2025-09-30 14:15:58.797000991 +0000 UTC m=+0.103642352 container start bae4bf4b80c1580864f5ef969e5e61520d94daa1191bd1cba7639d84610b5678 (image=quay.io/ceph/ceph:v19, name=sleepy_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:15:58 compute-0 podman[93699]: 2025-09-30 14:15:58.801734276 +0000 UTC m=+0.108375687 container attach bae4bf4b80c1580864f5ef969e5e61520d94daa1191bd1cba7639d84610b5678 (image=quay.io/ceph/ceph:v19, name=sleepy_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:15:58 compute-0 podman[93699]: 2025-09-30 14:15:58.713504241 +0000 UTC m=+0.020145622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:15:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 14:15:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2933351013' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:15:59 compute-0 sleepy_burnell[93714]: 
Sep 30 14:15:59 compute-0 sleepy_burnell[93714]: {"epoch":3,"fsid":"5e3c7776-ac03-5698-b79f-a6dc2d80cae6","modified":"2025-09-30T14:14:28.667145Z","created":"2025-09-30T14:12:03.527961Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Sep 30 14:15:59 compute-0 sleepy_burnell[93714]: dumped monmap epoch 3
Sep 30 14:15:59 compute-0 systemd[1]: libpod-bae4bf4b80c1580864f5ef969e5e61520d94daa1191bd1cba7639d84610b5678.scope: Deactivated successfully.
Sep 30 14:15:59 compute-0 podman[93739]: 2025-09-30 14:15:59.288333026 +0000 UTC m=+0.034023308 container died bae4bf4b80c1580864f5ef969e5e61520d94daa1191bd1cba7639d84610b5678 (image=quay.io/ceph/ceph:v19, name=sleepy_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:15:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-97a89598ee21a281eccfa0ab08db16380297be367a4f49271cde0b0454b6593d-merged.mount: Deactivated successfully.
Sep 30 14:15:59 compute-0 podman[93739]: 2025-09-30 14:15:59.318960473 +0000 UTC m=+0.064650725 container remove bae4bf4b80c1580864f5ef969e5e61520d94daa1191bd1cba7639d84610b5678 (image=quay.io/ceph/ceph:v19, name=sleepy_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:15:59 compute-0 systemd[1]: libpod-conmon-bae4bf4b80c1580864f5ef969e5e61520d94daa1191bd1cba7639d84610b5678.scope: Deactivated successfully.
Sep 30 14:15:59 compute-0 sudo[93696]: pam_unix(sudo:session): session closed for user root
Sep 30 14:15:59 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3702902197' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 14:15:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:15:59 compute-0 sudo[93777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmcejcqadxbrtfwdtxoxbixyaaedyspt ; /usr/bin/python3'
Sep 30 14:15:59 compute-0 sudo[93777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:15:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v13: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Sep 30 14:15:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:15:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:15:59 compute-0 python3[93779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:15:59 compute-0 podman[93780]: 2025-09-30 14:15:59.996430253 +0000 UTC m=+0.077113493 container create be16c1f76a2b1db4864373150d6a89416e80eb6a06be841cbabf424b9ddb0b71 (image=quay.io/ceph/ceph:v19, name=fervent_haslett, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:16:00 compute-0 systemd[1]: Started libpod-conmon-be16c1f76a2b1db4864373150d6a89416e80eb6a06be841cbabf424b9ddb0b71.scope.
Sep 30 14:16:00 compute-0 podman[93780]: 2025-09-30 14:15:59.944020522 +0000 UTC m=+0.024703792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:16:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c42277a9696a20fe353829fbcc753c0ef77e94efa4f235f943797b05300de9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c42277a9696a20fe353829fbcc753c0ef77e94efa4f235f943797b05300de9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:00 compute-0 podman[93780]: 2025-09-30 14:16:00.110086918 +0000 UTC m=+0.190770158 container init be16c1f76a2b1db4864373150d6a89416e80eb6a06be841cbabf424b9ddb0b71 (image=quay.io/ceph/ceph:v19, name=fervent_haslett, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:00 compute-0 podman[93780]: 2025-09-30 14:16:00.119951678 +0000 UTC m=+0.200634908 container start be16c1f76a2b1db4864373150d6a89416e80eb6a06be841cbabf424b9ddb0b71 (image=quay.io/ceph/ceph:v19, name=fervent_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:16:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Sep 30 14:16:00 compute-0 podman[93780]: 2025-09-30 14:16:00.144154215 +0000 UTC m=+0.224837435 container attach be16c1f76a2b1db4864373150d6a89416e80eb6a06be841cbabf424b9ddb0b71 (image=quay.io/ceph/ceph:v19, name=fervent_haslett, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:16:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:00 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Sep 30 14:16:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Sep 30 14:16:00 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2933351013' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:16:00 compute-0 ceph-mon[74194]: pgmap v13: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Sep 30 14:16:00 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:00 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:00 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Sep 30 14:16:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/864796473' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Sep 30 14:16:00 compute-0 fervent_haslett[93795]: [client.openstack]
Sep 30 14:16:00 compute-0 fervent_haslett[93795]:         key = AQAM5dtoAAAAABAAzvguOWjVdWRDH6OkdLxqDw==
Sep 30 14:16:00 compute-0 fervent_haslett[93795]:         caps mgr = "allow *"
Sep 30 14:16:00 compute-0 fervent_haslett[93795]:         caps mon = "profile rbd"
Sep 30 14:16:00 compute-0 fervent_haslett[93795]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Sep 30 14:16:00 compute-0 systemd[1]: libpod-be16c1f76a2b1db4864373150d6a89416e80eb6a06be841cbabf424b9ddb0b71.scope: Deactivated successfully.
Sep 30 14:16:00 compute-0 podman[93780]: 2025-09-30 14:16:00.572714137 +0000 UTC m=+0.653397367 container died be16c1f76a2b1db4864373150d6a89416e80eb6a06be841cbabf424b9ddb0b71 (image=quay.io/ceph/ceph:v19, name=fervent_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-76c42277a9696a20fe353829fbcc753c0ef77e94efa4f235f943797b05300de9-merged.mount: Deactivated successfully.
Sep 30 14:16:00 compute-0 podman[93780]: 2025-09-30 14:16:00.604138945 +0000 UTC m=+0.684822175 container remove be16c1f76a2b1db4864373150d6a89416e80eb6a06be841cbabf424b9ddb0b71 (image=quay.io/ceph/ceph:v19, name=fervent_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:16:00 compute-0 systemd[1]: libpod-conmon-be16c1f76a2b1db4864373150d6a89416e80eb6a06be841cbabf424b9ddb0b71.scope: Deactivated successfully.
Sep 30 14:16:00 compute-0 sudo[93777]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:01 compute-0 ceph-mon[74194]: Deploying daemon node-exporter.compute-2 on compute-2
Sep 30 14:16:01 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/864796473' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Sep 30 14:16:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v14: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Sep 30 14:16:02 compute-0 sudo[93979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fplncjwhixfsksxvwatovebtwsjukoar ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759241761.6666536-35529-250655314345427/async_wrapper.py j956357678869 30 /home/zuul/.ansible/tmp/ansible-tmp-1759241761.6666536-35529-250655314345427/AnsiballZ_command.py _'
Sep 30 14:16:02 compute-0 sudo[93979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:16:02 compute-0 ansible-async_wrapper.py[93981]: Invoked with j956357678869 30 /home/zuul/.ansible/tmp/ansible-tmp-1759241761.6666536-35529-250655314345427/AnsiballZ_command.py _
Sep 30 14:16:02 compute-0 ansible-async_wrapper.py[93984]: Starting module and watcher
Sep 30 14:16:02 compute-0 ansible-async_wrapper.py[93984]: Start watching 93985 (30)
Sep 30 14:16:02 compute-0 ansible-async_wrapper.py[93985]: Start module (93985)
Sep 30 14:16:02 compute-0 ansible-async_wrapper.py[93981]: Return async_wrapper task started.
Sep 30 14:16:02 compute-0 sudo[93979]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:02 compute-0 python3[93986]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:16:02 compute-0 podman[93987]: 2025-09-30 14:16:02.37597965 +0000 UTC m=+0.038410743 container create 01cfa0b7deea3d2dbc31b7fb0fa52c958a64d9bfd82a938ebd8b4c9ffac31d33 (image=quay.io/ceph/ceph:v19, name=sweet_pare, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:16:02 compute-0 systemd[1]: Started libpod-conmon-01cfa0b7deea3d2dbc31b7fb0fa52c958a64d9bfd82a938ebd8b4c9ffac31d33.scope.
Sep 30 14:16:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5024c3593041251ece672c1a2871a9cee0f1b5ef55475c02ddf54fd2a135ed31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5024c3593041251ece672c1a2871a9cee0f1b5ef55475c02ddf54fd2a135ed31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:02 compute-0 podman[93987]: 2025-09-30 14:16:02.358366776 +0000 UTC m=+0.020797889 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:16:02 compute-0 podman[93987]: 2025-09-30 14:16:02.462256674 +0000 UTC m=+0.124687777 container init 01cfa0b7deea3d2dbc31b7fb0fa52c958a64d9bfd82a938ebd8b4c9ffac31d33 (image=quay.io/ceph/ceph:v19, name=sweet_pare, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:16:02 compute-0 podman[93987]: 2025-09-30 14:16:02.469302539 +0000 UTC m=+0.131733632 container start 01cfa0b7deea3d2dbc31b7fb0fa52c958a64d9bfd82a938ebd8b4c9ffac31d33 (image=quay.io/ceph/ceph:v19, name=sweet_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 14:16:02 compute-0 podman[93987]: 2025-09-30 14:16:02.47275667 +0000 UTC m=+0.135187793 container attach 01cfa0b7deea3d2dbc31b7fb0fa52c958a64d9bfd82a938ebd8b4c9ffac31d33 (image=quay.io/ceph/ceph:v19, name=sweet_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:02 compute-0 ceph-mon[74194]: pgmap v14: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Sep 30 14:16:02 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:16:02 compute-0 sweet_pare[94002]: 
Sep 30 14:16:02 compute-0 sweet_pare[94002]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Sep 30 14:16:02 compute-0 systemd[1]: libpod-01cfa0b7deea3d2dbc31b7fb0fa52c958a64d9bfd82a938ebd8b4c9ffac31d33.scope: Deactivated successfully.
Sep 30 14:16:02 compute-0 podman[93987]: 2025-09-30 14:16:02.837546321 +0000 UTC m=+0.499977424 container died 01cfa0b7deea3d2dbc31b7fb0fa52c958a64d9bfd82a938ebd8b4c9ffac31d33 (image=quay.io/ceph/ceph:v19, name=sweet_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 14:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5024c3593041251ece672c1a2871a9cee0f1b5ef55475c02ddf54fd2a135ed31-merged.mount: Deactivated successfully.
Sep 30 14:16:02 compute-0 podman[93987]: 2025-09-30 14:16:02.87434493 +0000 UTC m=+0.536776013 container remove 01cfa0b7deea3d2dbc31b7fb0fa52c958a64d9bfd82a938ebd8b4c9ffac31d33 (image=quay.io/ceph/ceph:v19, name=sweet_pare, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:16:02 compute-0 systemd[1]: libpod-conmon-01cfa0b7deea3d2dbc31b7fb0fa52c958a64d9bfd82a938ebd8b4c9ffac31d33.scope: Deactivated successfully.
Sep 30 14:16:02 compute-0 ansible-async_wrapper.py[93985]: Module complete (93985)
Sep 30 14:16:03 compute-0 sudo[94086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctquwhrrkkzbrnmlrioyhsgqkkuduyir ; /usr/bin/python3'
Sep 30 14:16:03 compute-0 sudo[94086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:16:03 compute-0 python3[94088]: ansible-ansible.legacy.async_status Invoked with jid=j956357678869.93981 mode=status _async_dir=/root/.ansible_async
Sep 30 14:16:03 compute-0 sudo[94086]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:03 compute-0 sudo[94135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flhpfkpkqeahcofhymfynbuxhvnfhhiu ; /usr/bin/python3'
Sep 30 14:16:03 compute-0 sudo[94135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:16:03 compute-0 ceph-mon[74194]: from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:16:03 compute-0 python3[94137]: ansible-ansible.legacy.async_status Invoked with jid=j956357678869.93981 mode=cleanup _async_dir=/root/.ansible_async
Sep 30 14:16:03 compute-0 sudo[94135]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v15: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Sep 30 14:16:04 compute-0 sudo[94161]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kewtychzrezvcmodqdefriitabzvncsc ; /usr/bin/python3'
Sep 30 14:16:04 compute-0 sudo[94161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:16:04 compute-0 python3[94163]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:16:04 compute-0 podman[94164]: 2025-09-30 14:16:04.439580733 +0000 UTC m=+0.091011540 container create 59e9b61da62cda7c57a0a3196167e653ac98267fdcd9fc339537a93f0699a01e (image=quay.io/ceph/ceph:v19, name=relaxed_cohen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:16:04 compute-0 podman[94164]: 2025-09-30 14:16:04.371731684 +0000 UTC m=+0.023162521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:16:04 compute-0 systemd[1]: Started libpod-conmon-59e9b61da62cda7c57a0a3196167e653ac98267fdcd9fc339537a93f0699a01e.scope.
Sep 30 14:16:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a891c556b785981dcf9932ba19f1a8e55ed71f39f7d9829e1e88d9f8991bad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a891c556b785981dcf9932ba19f1a8e55ed71f39f7d9829e1e88d9f8991bad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:04 compute-0 podman[94164]: 2025-09-30 14:16:04.519840257 +0000 UTC m=+0.171271154 container init 59e9b61da62cda7c57a0a3196167e653ac98267fdcd9fc339537a93f0699a01e (image=quay.io/ceph/ceph:v19, name=relaxed_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:04 compute-0 podman[94164]: 2025-09-30 14:16:04.527591581 +0000 UTC m=+0.179022388 container start 59e9b61da62cda7c57a0a3196167e653ac98267fdcd9fc339537a93f0699a01e (image=quay.io/ceph/ceph:v19, name=relaxed_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:16:04 compute-0 podman[94164]: 2025-09-30 14:16:04.554298365 +0000 UTC m=+0.205729192 container attach 59e9b61da62cda7c57a0a3196167e653ac98267fdcd9fc339537a93f0699a01e (image=quay.io/ceph/ceph:v19, name=relaxed_cohen, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:16:04 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14496 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:16:04 compute-0 relaxed_cohen[94179]: 
Sep 30 14:16:04 compute-0 relaxed_cohen[94179]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Sep 30 14:16:04 compute-0 systemd[1]: libpod-59e9b61da62cda7c57a0a3196167e653ac98267fdcd9fc339537a93f0699a01e.scope: Deactivated successfully.
Sep 30 14:16:04 compute-0 podman[94204]: 2025-09-30 14:16:04.947972357 +0000 UTC m=+0.020377498 container died 59e9b61da62cda7c57a0a3196167e653ac98267fdcd9fc339537a93f0699a01e (image=quay.io/ceph/ceph:v19, name=relaxed_cohen, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:05 compute-0 ceph-mon[74194]: pgmap v15: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Sep 30 14:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-29a891c556b785981dcf9932ba19f1a8e55ed71f39f7d9829e1e88d9f8991bad-merged.mount: Deactivated successfully.
Sep 30 14:16:05 compute-0 podman[94204]: 2025-09-30 14:16:05.64526766 +0000 UTC m=+0.717672781 container remove 59e9b61da62cda7c57a0a3196167e653ac98267fdcd9fc339537a93f0699a01e (image=quay.io/ceph/ceph:v19, name=relaxed_cohen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 14:16:05 compute-0 systemd[1]: libpod-conmon-59e9b61da62cda7c57a0a3196167e653ac98267fdcd9fc339537a93f0699a01e.scope: Deactivated successfully.
Sep 30 14:16:05 compute-0 sudo[94161]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v16: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:16:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:16:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Sep 30 14:16:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:06 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev d573815c-452d-47ce-ab05-00264a997cf4 (Updating node-exporter deployment (+2 -> 3))
Sep 30 14:16:06 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event d573815c-452d-47ce-ab05-00264a997cf4 (Updating node-exporter deployment (+2 -> 3)) in 10 seconds
Sep 30 14:16:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Sep 30 14:16:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:16:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:16:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:16:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:16:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:06 compute-0 sudo[94219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:16:06 compute-0 sudo[94219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:06 compute-0 sudo[94219]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:06 compute-0 sudo[94244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:16:06 compute-0 sudo[94244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:06 compute-0 sudo[94292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emvekxixfjmkgqedbgdsgpvtgcgcsoqd ; /usr/bin/python3'
Sep 30 14:16:06 compute-0 sudo[94292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:16:06 compute-0 python3[94294]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:16:06 compute-0 podman[94330]: 2025-09-30 14:16:06.699193098 +0000 UTC m=+0.042748477 container create 12f5cb380e324f6201b4b16c792385765f2df1785b1c492bc60ab7f964d15c4b (image=quay.io/ceph/ceph:v19, name=nifty_khayyam, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:16:06 compute-0 systemd[1]: Started libpod-conmon-12f5cb380e324f6201b4b16c792385765f2df1785b1c492bc60ab7f964d15c4b.scope.
Sep 30 14:16:06 compute-0 ceph-mon[74194]: from='client.14496 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:16:06 compute-0 ceph-mon[74194]: pgmap v16: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:06 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:06 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:06 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:06 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:06 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:16:06 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:16:06 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:06 compute-0 podman[94346]: 2025-09-30 14:16:06.743318661 +0000 UTC m=+0.062583820 container create 0c69227d854f65949e2b9a56cdeea6526a4fb0c7fad23fb598ecefa187b0b648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_raman, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:16:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:06 compute-0 systemd[1]: Started libpod-conmon-0c69227d854f65949e2b9a56cdeea6526a4fb0c7fad23fb598ecefa187b0b648.scope.
Sep 30 14:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd1dfdba698d942705bd709478e6fe36c19da3529ac10a13b9931efbadb07ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd1dfdba698d942705bd709478e6fe36c19da3529ac10a13b9931efbadb07ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:06 compute-0 podman[94330]: 2025-09-30 14:16:06.771608366 +0000 UTC m=+0.115163775 container init 12f5cb380e324f6201b4b16c792385765f2df1785b1c492bc60ab7f964d15c4b (image=quay.io/ceph/ceph:v19, name=nifty_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Sep 30 14:16:06 compute-0 podman[94330]: 2025-09-30 14:16:06.675851973 +0000 UTC m=+0.019407362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:16:06 compute-0 podman[94330]: 2025-09-30 14:16:06.775970631 +0000 UTC m=+0.119526010 container start 12f5cb380e324f6201b4b16c792385765f2df1785b1c492bc60ab7f964d15c4b (image=quay.io/ceph/ceph:v19, name=nifty_khayyam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:16:06 compute-0 podman[94330]: 2025-09-30 14:16:06.779822052 +0000 UTC m=+0.123377441 container attach 12f5cb380e324f6201b4b16c792385765f2df1785b1c492bc60ab7f964d15c4b (image=quay.io/ceph/ceph:v19, name=nifty_khayyam, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 14:16:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:06 compute-0 podman[94346]: 2025-09-30 14:16:06.795368702 +0000 UTC m=+0.114633881 container init 0c69227d854f65949e2b9a56cdeea6526a4fb0c7fad23fb598ecefa187b0b648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_raman, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:06 compute-0 podman[94346]: 2025-09-30 14:16:06.800155748 +0000 UTC m=+0.119420907 container start 0c69227d854f65949e2b9a56cdeea6526a4fb0c7fad23fb598ecefa187b0b648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:16:06 compute-0 angry_raman[94369]: 167 167
Sep 30 14:16:06 compute-0 systemd[1]: libpod-0c69227d854f65949e2b9a56cdeea6526a4fb0c7fad23fb598ecefa187b0b648.scope: Deactivated successfully.
Sep 30 14:16:06 compute-0 podman[94346]: 2025-09-30 14:16:06.804868852 +0000 UTC m=+0.124134041 container attach 0c69227d854f65949e2b9a56cdeea6526a4fb0c7fad23fb598ecefa187b0b648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:16:06 compute-0 podman[94346]: 2025-09-30 14:16:06.805252092 +0000 UTC m=+0.124517251 container died 0c69227d854f65949e2b9a56cdeea6526a4fb0c7fad23fb598ecefa187b0b648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Sep 30 14:16:06 compute-0 podman[94346]: 2025-09-30 14:16:06.72202907 +0000 UTC m=+0.041294259 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-040ad7a5bd3efa72e535cccd3b720991c9ace5bf39495fb67848464a5f041943-merged.mount: Deactivated successfully.
Sep 30 14:16:06 compute-0 podman[94346]: 2025-09-30 14:16:06.869082955 +0000 UTC m=+0.188348114 container remove 0c69227d854f65949e2b9a56cdeea6526a4fb0c7fad23fb598ecefa187b0b648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:06 compute-0 systemd[1]: libpod-conmon-0c69227d854f65949e2b9a56cdeea6526a4fb0c7fad23fb598ecefa187b0b648.scope: Deactivated successfully.
Sep 30 14:16:07 compute-0 podman[94414]: 2025-09-30 14:16:07.010258344 +0000 UTC m=+0.039478731 container create 2def5b1e2fbf100c0dcd9d8133ec5f454b6b137139660615c3216173801589c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_pasteur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:16:07 compute-0 systemd[1]: Started libpod-conmon-2def5b1e2fbf100c0dcd9d8133ec5f454b6b137139660615c3216173801589c3.scope.
Sep 30 14:16:07 compute-0 podman[94414]: 2025-09-30 14:16:06.991541901 +0000 UTC m=+0.020762288 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913d799781bd0a5434d4df740d0c0842fa7b4251dc0fe89a2478ec6967a7ae54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913d799781bd0a5434d4df740d0c0842fa7b4251dc0fe89a2478ec6967a7ae54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913d799781bd0a5434d4df740d0c0842fa7b4251dc0fe89a2478ec6967a7ae54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913d799781bd0a5434d4df740d0c0842fa7b4251dc0fe89a2478ec6967a7ae54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913d799781bd0a5434d4df740d0c0842fa7b4251dc0fe89a2478ec6967a7ae54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:07 compute-0 podman[94414]: 2025-09-30 14:16:07.115695112 +0000 UTC m=+0.144915509 container init 2def5b1e2fbf100c0dcd9d8133ec5f454b6b137139660615c3216173801589c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:16:07 compute-0 podman[94414]: 2025-09-30 14:16:07.122230264 +0000 UTC m=+0.151450651 container start 2def5b1e2fbf100c0dcd9d8133ec5f454b6b137139660615c3216173801589c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:16:07 compute-0 podman[94414]: 2025-09-30 14:16:07.125417318 +0000 UTC m=+0.154637735 container attach 2def5b1e2fbf100c0dcd9d8133ec5f454b6b137139660615c3216173801589c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_pasteur, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:16:07 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:16:07 compute-0 nifty_khayyam[94364]: 
Sep 30 14:16:07 compute-0 nifty_khayyam[94364]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Sep 30 14:16:07 compute-0 systemd[1]: libpod-12f5cb380e324f6201b4b16c792385765f2df1785b1c492bc60ab7f964d15c4b.scope: Deactivated successfully.
Sep 30 14:16:07 compute-0 podman[94330]: 2025-09-30 14:16:07.159516197 +0000 UTC m=+0.503071596 container died 12f5cb380e324f6201b4b16c792385765f2df1785b1c492bc60ab7f964d15c4b (image=quay.io/ceph/ceph:v19, name=nifty_khayyam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 14:16:07 compute-0 ansible-async_wrapper.py[93984]: Done in kid B.
Sep 30 14:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cd1dfdba698d942705bd709478e6fe36c19da3529ac10a13b9931efbadb07ef-merged.mount: Deactivated successfully.
Sep 30 14:16:07 compute-0 podman[94330]: 2025-09-30 14:16:07.20063057 +0000 UTC m=+0.544185959 container remove 12f5cb380e324f6201b4b16c792385765f2df1785b1c492bc60ab7f964d15c4b (image=quay.io/ceph/ceph:v19, name=nifty_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:16:07 compute-0 systemd[1]: libpod-conmon-12f5cb380e324f6201b4b16c792385765f2df1785b1c492bc60ab7f964d15c4b.scope: Deactivated successfully.
Sep 30 14:16:07 compute-0 sudo[94292]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:07 compute-0 lucid_pasteur[94430]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:16:07 compute-0 lucid_pasteur[94430]: --> All data devices are unavailable
Sep 30 14:16:07 compute-0 systemd[1]: libpod-2def5b1e2fbf100c0dcd9d8133ec5f454b6b137139660615c3216173801589c3.scope: Deactivated successfully.
Sep 30 14:16:07 compute-0 podman[94414]: 2025-09-30 14:16:07.469887204 +0000 UTC m=+0.499107591 container died 2def5b1e2fbf100c0dcd9d8133ec5f454b6b137139660615c3216173801589c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_pasteur, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 14:16:07 compute-0 podman[94414]: 2025-09-30 14:16:07.509034316 +0000 UTC m=+0.538254703 container remove 2def5b1e2fbf100c0dcd9d8133ec5f454b6b137139660615c3216173801589c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_pasteur, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:16:07 compute-0 systemd[1]: libpod-conmon-2def5b1e2fbf100c0dcd9d8133ec5f454b6b137139660615c3216173801589c3.scope: Deactivated successfully.
Sep 30 14:16:07 compute-0 sudo[94244]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:07 compute-0 sudo[94470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:16:07 compute-0 sudo[94470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:07 compute-0 sudo[94470]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:07 compute-0 sudo[94495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:16:07 compute-0 sudo[94495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-913d799781bd0a5434d4df740d0c0842fa7b4251dc0fe89a2478ec6967a7ae54-merged.mount: Deactivated successfully.
Sep 30 14:16:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v17: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:07 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 10 completed events
Sep 30 14:16:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:16:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:08 compute-0 podman[94561]: 2025-09-30 14:16:08.012397929 +0000 UTC m=+0.036719739 container create c1a6d687044677611eb37ad3d2af4017f4341cfff693732ef69ec3fd18a1d935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_raman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 14:16:08 compute-0 systemd[1]: Started libpod-conmon-c1a6d687044677611eb37ad3d2af4017f4341cfff693732ef69ec3fd18a1d935.scope.
Sep 30 14:16:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:08 compute-0 sudo[94603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfwmossgtagecqderqxeadmgrpzcidls ; /usr/bin/python3'
Sep 30 14:16:08 compute-0 sudo[94603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:16:08 compute-0 podman[94561]: 2025-09-30 14:16:08.083367359 +0000 UTC m=+0.107689189 container init c1a6d687044677611eb37ad3d2af4017f4341cfff693732ef69ec3fd18a1d935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:16:08 compute-0 podman[94561]: 2025-09-30 14:16:08.090752973 +0000 UTC m=+0.115074783 container start c1a6d687044677611eb37ad3d2af4017f4341cfff693732ef69ec3fd18a1d935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_raman, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:16:08 compute-0 podman[94561]: 2025-09-30 14:16:07.996098939 +0000 UTC m=+0.020420779 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:08 compute-0 podman[94561]: 2025-09-30 14:16:08.093688361 +0000 UTC m=+0.118010201 container attach c1a6d687044677611eb37ad3d2af4017f4341cfff693732ef69ec3fd18a1d935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:16:08 compute-0 elastic_raman[94597]: 167 167
Sep 30 14:16:08 compute-0 systemd[1]: libpod-c1a6d687044677611eb37ad3d2af4017f4341cfff693732ef69ec3fd18a1d935.scope: Deactivated successfully.
Sep 30 14:16:08 compute-0 podman[94561]: 2025-09-30 14:16:08.096387742 +0000 UTC m=+0.120709552 container died c1a6d687044677611eb37ad3d2af4017f4341cfff693732ef69ec3fd18a1d935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-532b28d75f6e9469797ba4b49a82c3edd2f38a7ea6278ee9f67da65543eea69e-merged.mount: Deactivated successfully.
Sep 30 14:16:08 compute-0 podman[94561]: 2025-09-30 14:16:08.13236564 +0000 UTC m=+0.156687450 container remove c1a6d687044677611eb37ad3d2af4017f4341cfff693732ef69ec3fd18a1d935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:16:08 compute-0 systemd[1]: libpod-conmon-c1a6d687044677611eb37ad3d2af4017f4341cfff693732ef69ec3fd18a1d935.scope: Deactivated successfully.
Sep 30 14:16:08 compute-0 python3[94605]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:16:08 compute-0 podman[94623]: 2025-09-30 14:16:08.247194675 +0000 UTC m=+0.020777738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:16:08 compute-0 podman[94623]: 2025-09-30 14:16:08.345638289 +0000 UTC m=+0.119221362 container create 27b3c8d34317d5731d69f033df4c139aca443ad018298bdf80a7084fda40019d (image=quay.io/ceph/ceph:v19, name=cool_ride, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:16:08 compute-0 systemd[1]: Started libpod-conmon-27b3c8d34317d5731d69f033df4c139aca443ad018298bdf80a7084fda40019d.scope.
Sep 30 14:16:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb99bc088f0dc280625a90267098a42059cabd34e9c8715ac42f7260d3bf3b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb99bc088f0dc280625a90267098a42059cabd34e9c8715ac42f7260d3bf3b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:08 compute-0 podman[94641]: 2025-09-30 14:16:08.466830032 +0000 UTC m=+0.207392295 container create a60e40c0ebd489b2b24b7cfa86c8030f98f7f4bc09c8078b78e07a83c7173d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jemison, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:16:08 compute-0 podman[94641]: 2025-09-30 14:16:08.409869491 +0000 UTC m=+0.150431774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:08 compute-0 podman[94623]: 2025-09-30 14:16:08.527157922 +0000 UTC m=+0.300740985 container init 27b3c8d34317d5731d69f033df4c139aca443ad018298bdf80a7084fda40019d (image=quay.io/ceph/ceph:v19, name=cool_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:16:08 compute-0 podman[94623]: 2025-09-30 14:16:08.536537709 +0000 UTC m=+0.310120742 container start 27b3c8d34317d5731d69f033df4c139aca443ad018298bdf80a7084fda40019d (image=quay.io/ceph/ceph:v19, name=cool_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:16:08 compute-0 podman[94623]: 2025-09-30 14:16:08.539539318 +0000 UTC m=+0.313122371 container attach 27b3c8d34317d5731d69f033df4c139aca443ad018298bdf80a7084fda40019d (image=quay.io/ceph/ceph:v19, name=cool_ride, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 14:16:08 compute-0 systemd[1]: Started libpod-conmon-a60e40c0ebd489b2b24b7cfa86c8030f98f7f4bc09c8078b78e07a83c7173d42.scope.
Sep 30 14:16:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3f950a38d826c4ef79e2e1ab831661c7c2fcaa94cfe663908fedebfd69af1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3f950a38d826c4ef79e2e1ab831661c7c2fcaa94cfe663908fedebfd69af1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3f950a38d826c4ef79e2e1ab831661c7c2fcaa94cfe663908fedebfd69af1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3f950a38d826c4ef79e2e1ab831661c7c2fcaa94cfe663908fedebfd69af1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:08 compute-0 podman[94641]: 2025-09-30 14:16:08.590190523 +0000 UTC m=+0.330752786 container init a60e40c0ebd489b2b24b7cfa86c8030f98f7f4bc09c8078b78e07a83c7173d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jemison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:08 compute-0 podman[94641]: 2025-09-30 14:16:08.597269049 +0000 UTC m=+0.337831312 container start a60e40c0ebd489b2b24b7cfa86c8030f98f7f4bc09c8078b78e07a83c7173d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jemison, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:16:08 compute-0 podman[94641]: 2025-09-30 14:16:08.60033307 +0000 UTC m=+0.340895323 container attach a60e40c0ebd489b2b24b7cfa86c8030f98f7f4bc09c8078b78e07a83c7173d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jemison, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 14:16:08 compute-0 clever_jemison[94663]: {
Sep 30 14:16:08 compute-0 clever_jemison[94663]:     "0": [
Sep 30 14:16:08 compute-0 clever_jemison[94663]:         {
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "devices": [
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "/dev/loop3"
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             ],
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "lv_name": "ceph_lv0",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "lv_size": "21470642176",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "name": "ceph_lv0",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "tags": {
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.cluster_name": "ceph",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.crush_device_class": "",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.encrypted": "0",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.osd_id": "0",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.type": "block",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.vdo": "0",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:                 "ceph.with_tpm": "0"
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             },
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "type": "block",
Sep 30 14:16:08 compute-0 clever_jemison[94663]:             "vg_name": "ceph_vg0"
Sep 30 14:16:08 compute-0 clever_jemison[94663]:         }
Sep 30 14:16:08 compute-0 clever_jemison[94663]:     ]
Sep 30 14:16:08 compute-0 clever_jemison[94663]: }
Sep 30 14:16:08 compute-0 ceph-mon[74194]: from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:16:08 compute-0 ceph-mon[74194]: pgmap v17: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:08 compute-0 systemd[1]: libpod-a60e40c0ebd489b2b24b7cfa86c8030f98f7f4bc09c8078b78e07a83c7173d42.scope: Deactivated successfully.
Sep 30 14:16:08 compute-0 podman[94641]: 2025-09-30 14:16:08.890330991 +0000 UTC m=+0.630893274 container died a60e40c0ebd489b2b24b7cfa86c8030f98f7f4bc09c8078b78e07a83c7173d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jemison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:16:08 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc3f950a38d826c4ef79e2e1ab831661c7c2fcaa94cfe663908fedebfd69af1d-merged.mount: Deactivated successfully.
Sep 30 14:16:08 compute-0 cool_ride[94657]: 
Sep 30 14:16:08 compute-0 cool_ride[94657]: [{"container_id": "ccc58ffcac3b", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.10%", "created": "2025-09-30T14:12:58.881052Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-09-30T14:15:49.528529Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2025-09-30T14:12:58.798160Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@crash.compute-0", "version": "19.2.3"}, {"container_id": "8bfd154ce5d3", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.37%", "created": "2025-09-30T14:13:38.316132Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-09-30T14:15:49.464573Z", "memory_usage": 7817134, "ports": [], "service_name": "crash", "started": "2025-09-30T14:13:38.181938Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@crash.compute-1", "version": "19.2.3"}, {"container_id": "e6ac0f233645", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.21%", "created": "2025-09-30T14:14:37.644436Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-09-30T14:15:49.657552Z", "memory_usage": 7803502, "ports": [], "service_name": "crash", "started": "2025-09-30T14:14:37.516348Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@crash.compute-2", "version": "19.2.3"}, {"container_id": "a69f0208767c", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "24.88%", "created": "2025-09-30T14:12:11.335717Z", "daemon_id": "compute-0.buxlkm", "daemon_name": "mgr.compute-0.buxlkm", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-09-30T14:15:49.528454Z", "memory_usage": 541484646, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-09-30T14:12:11.245461Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@mgr.compute-0.buxlkm", "version": "19.2.3"}, {"container_id": "ec46654b48e1", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "37.27%", "created": "2025-09-30T14:14:35.303583Z", "daemon_id": "compute-1.zeqptq", "daemon_name": "mgr.compute-1.zeqptq", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-09-30T14:15:49.464848Z", "memory_usage": 504574771, "ports": [8765], "service_name": "mgr", "started": "2025-09-30T14:14:35.216986Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@mgr.compute-1.zeqptq", "version": "19.2.3"}, {"container_id": "434a025f8712", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "34.85%", "created": "2025-09-30T14:14:32.138283Z", "daemon_id": "compute-2.udzudc", "daemon_name": "mgr.compute-2.udzudc", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-09-30T14:15:49.657450Z", "memory_usage": 503840768, "ports": [8765], "service_name": "mgr", "started": "2025-09-30T14:14:31.822858Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@mgr.compute-2.udzudc", "version": "19.2.3"}, {"container_id": "a277d7b6b6f3", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.27%", "created": "2025-09-30T14:12:06.414589Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-09-30T14:15:49.528339Z", "memory_request": 2147483648, "memory_usage": 59653488, "ports": [], "service_name": "mon", "started": "2025-09-30T14:12:09.550096Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@mon.compute-0", "version": "19.2.3"}, {"container_id": "d9065ecc06b6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.57%", "created": "2025-09-30T14:14:24.568310Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-09-30T14:15:49.464766Z", "memory_request": 2147483648, "memory_usage": 49230643, "ports": [], "service_name": "mon", "started": "2025-09-30T14:14:24.462467Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@mon.compute-1", "version": "19.2.3"}, {"container_id": "b687cccf66ff", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.90%", "created": "2025-09-30T14:14:22.628018Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-09-30T14:15:49.657314Z", "memory_request": 2147483648, "memory_usage": 51873054, "ports": [], "service_name": "mon", "started": "2025-09-30T14:14:22.225702Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@mon.compute-2", "version": "19.2.3"}, {"container_id": "0d94fdcb0089", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e", "quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.19%", "created": "2025-09-30T14:15:39.454180Z", "daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-09-30T14:15:49.528686Z", "memory_usage": 3831496, "ports": [9100], "service_name": "node-exporter", "started": "2025-09-30T14:15:38.443143Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@node-exporter.compute-0", "version": "1.7.0"}, {"daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "events": ["2025-09-30T14:16:00.136101Z daemon:node-exporter.compute-1 [INFO] \"Deployed node-exporter.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2025-09-30T14:16:06.227873Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "2db0b61839fe", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.79%", "created": "2025-09-30T14:13:50.477377Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-09-30T14:15:49.528597Z", "memory_request": 4294967296, "memory_usage": 68346183, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-09-30T14:13:50.360607Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@osd.0", "version": "19.2.3"}, {"container_id": "f1d66d4fbd96", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.57%", "created": "2025-09-30T14:13:51.454467Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-09-30T14:15:49.464692Z", "memory_request": 5502780620, "memory_usage": 67245178, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-09-30T14:13:51.306601Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@osd.1", "version": "19.2.3"}, {"container_id": "9fdc5c7d8dc6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "58.83%", "created": "2025-09-30T14:14:57.983973Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-09-30T14:15:49.657652Z", "memory_request": 4294967296, "memory_usage": 62117642, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-09-30T14:14:57.847922Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@osd.2", "version": "19.2.3"}, {"container_id": "41aaa3d3fad8", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.93%", "created": "2025-09-30T14:15:23.434646Z", "daemon_id": "rgw.compute-2.evkboy", "daemon_name": "rgw.rgw.compute-2.evkboy", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2025-09-30T14:15:49.657753Z", "memory_usage": 100893982, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-09-30T14:15:22.112214Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@rgw.rgw.compute-2.evkboy", "version": "19.2.3"}]
Sep 30 14:16:08 compute-0 podman[94641]: 2025-09-30 14:16:08.927197662 +0000 UTC m=+0.667759925 container remove a60e40c0ebd489b2b24b7cfa86c8030f98f7f4bc09c8078b78e07a83c7173d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_jemison, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:16:08 compute-0 podman[94623]: 2025-09-30 14:16:08.930233922 +0000 UTC m=+0.703816975 container died 27b3c8d34317d5731d69f033df4c139aca443ad018298bdf80a7084fda40019d (image=quay.io/ceph/ceph:v19, name=cool_ride, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 14:16:08 compute-0 systemd[1]: libpod-27b3c8d34317d5731d69f033df4c139aca443ad018298bdf80a7084fda40019d.scope: Deactivated successfully.
Sep 30 14:16:08 compute-0 systemd[1]: libpod-conmon-a60e40c0ebd489b2b24b7cfa86c8030f98f7f4bc09c8078b78e07a83c7173d42.scope: Deactivated successfully.
Sep 30 14:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfb99bc088f0dc280625a90267098a42059cabd34e9c8715ac42f7260d3bf3b9-merged.mount: Deactivated successfully.
Sep 30 14:16:08 compute-0 sudo[94495]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:08 compute-0 podman[94623]: 2025-09-30 14:16:08.967579926 +0000 UTC m=+0.741162959 container remove 27b3c8d34317d5731d69f033df4c139aca443ad018298bdf80a7084fda40019d (image=quay.io/ceph/ceph:v19, name=cool_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 14:16:08 compute-0 systemd[1]: libpod-conmon-27b3c8d34317d5731d69f033df4c139aca443ad018298bdf80a7084fda40019d.scope: Deactivated successfully.
Sep 30 14:16:09 compute-0 sudo[94603]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:09 compute-0 sudo[94715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:16:09 compute-0 sudo[94715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:09 compute-0 sudo[94715]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:09 compute-0 sudo[94742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:16:09 compute-0 sudo[94742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:09 compute-0 rsyslogd[1004]: message too long (13247) with configured size 8096, begin of message is: [{"container_id": "ccc58ffcac3b", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Sep 30 14:16:09 compute-0 podman[94807]: 2025-09-30 14:16:09.46111162 +0000 UTC m=+0.047792460 container create 049e298757d5e9745c8b758805c9d1bc99c764d028619b844a905cea8577cade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_heyrovsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:16:09 compute-0 systemd[1]: Started libpod-conmon-049e298757d5e9745c8b758805c9d1bc99c764d028619b844a905cea8577cade.scope.
Sep 30 14:16:09 compute-0 sshd-session[94720]: Invalid user seekcy from 209.38.228.14 port 42982
Sep 30 14:16:09 compute-0 podman[94807]: 2025-09-30 14:16:09.438780532 +0000 UTC m=+0.025461392 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:09 compute-0 podman[94807]: 2025-09-30 14:16:09.570687007 +0000 UTC m=+0.157367877 container init 049e298757d5e9745c8b758805c9d1bc99c764d028619b844a905cea8577cade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_heyrovsky, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:16:09 compute-0 podman[94807]: 2025-09-30 14:16:09.57912548 +0000 UTC m=+0.165806320 container start 049e298757d5e9745c8b758805c9d1bc99c764d028619b844a905cea8577cade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 14:16:09 compute-0 podman[94807]: 2025-09-30 14:16:09.582831027 +0000 UTC m=+0.169511897 container attach 049e298757d5e9745c8b758805c9d1bc99c764d028619b844a905cea8577cade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:16:09 compute-0 agitated_heyrovsky[94823]: 167 167
Sep 30 14:16:09 compute-0 systemd[1]: libpod-049e298757d5e9745c8b758805c9d1bc99c764d028619b844a905cea8577cade.scope: Deactivated successfully.
Sep 30 14:16:09 compute-0 podman[94807]: 2025-09-30 14:16:09.584541232 +0000 UTC m=+0.171222072 container died 049e298757d5e9745c8b758805c9d1bc99c764d028619b844a905cea8577cade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-194df221449c4f121ce0443300d9d852ecf77cc30ece078e583980b663ae3686-merged.mount: Deactivated successfully.
Sep 30 14:16:09 compute-0 podman[94807]: 2025-09-30 14:16:09.619549995 +0000 UTC m=+0.206230845 container remove 049e298757d5e9745c8b758805c9d1bc99c764d028619b844a905cea8577cade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:16:09 compute-0 sshd-session[94720]: Received disconnect from 209.38.228.14 port 42982:11: Bye Bye [preauth]
Sep 30 14:16:09 compute-0 sshd-session[94720]: Disconnected from invalid user seekcy 209.38.228.14 port 42982 [preauth]
Sep 30 14:16:09 compute-0 systemd[1]: libpod-conmon-049e298757d5e9745c8b758805c9d1bc99c764d028619b844a905cea8577cade.scope: Deactivated successfully.
Sep 30 14:16:09 compute-0 podman[94847]: 2025-09-30 14:16:09.781582244 +0000 UTC m=+0.052111644 container create 3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:16:09 compute-0 sudo[94885]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hycxleimoybhgfmadqaxakbphsuueubk ; /usr/bin/python3'
Sep 30 14:16:09 compute-0 sudo[94885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:16:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v18: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:09 compute-0 systemd[1]: Started libpod-conmon-3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f.scope.
Sep 30 14:16:09 compute-0 podman[94847]: 2025-09-30 14:16:09.762037289 +0000 UTC m=+0.032566709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21812e66bc585a3f6828e5c4b436774b6a721f047c4cee3489f7b39000382812/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21812e66bc585a3f6828e5c4b436774b6a721f047c4cee3489f7b39000382812/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21812e66bc585a3f6828e5c4b436774b6a721f047c4cee3489f7b39000382812/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21812e66bc585a3f6828e5c4b436774b6a721f047c4cee3489f7b39000382812/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:09 compute-0 podman[94847]: 2025-09-30 14:16:09.887162545 +0000 UTC m=+0.157691975 container init 3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 14:16:09 compute-0 podman[94847]: 2025-09-30 14:16:09.894113038 +0000 UTC m=+0.164642438 container start 3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:16:09 compute-0 podman[94847]: 2025-09-30 14:16:09.897695082 +0000 UTC m=+0.168224492 container attach 3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:16:09 compute-0 python3[94887]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:16:09 compute-0 ceph-mon[74194]: from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Sep 30 14:16:09 compute-0 ceph-mon[74194]: pgmap v18: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:10 compute-0 podman[94895]: 2025-09-30 14:16:10.014400417 +0000 UTC m=+0.040874058 container create c79d066452847d77fa5404e9a628d514ac21662c5800ea5e4743504a263e4378 (image=quay.io/ceph/ceph:v19, name=pedantic_keller, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 14:16:10 compute-0 systemd[1]: Started libpod-conmon-c79d066452847d77fa5404e9a628d514ac21662c5800ea5e4743504a263e4378.scope.
Sep 30 14:16:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d03690cc96932f5a2c87c4edda24f37c41e3fe8077ecc6fdec454ca536da42/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d03690cc96932f5a2c87c4edda24f37c41e3fe8077ecc6fdec454ca536da42/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:10 compute-0 podman[94895]: 2025-09-30 14:16:10.08966272 +0000 UTC m=+0.116136391 container init c79d066452847d77fa5404e9a628d514ac21662c5800ea5e4743504a263e4378 (image=quay.io/ceph/ceph:v19, name=pedantic_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 14:16:10 compute-0 podman[94895]: 2025-09-30 14:16:09.995960051 +0000 UTC m=+0.022433712 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:16:10 compute-0 podman[94895]: 2025-09-30 14:16:10.097304852 +0000 UTC m=+0.123778493 container start c79d066452847d77fa5404e9a628d514ac21662c5800ea5e4743504a263e4378 (image=quay.io/ceph/ceph:v19, name=pedantic_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:16:10 compute-0 podman[94895]: 2025-09-30 14:16:10.101145223 +0000 UTC m=+0.127618884 container attach c79d066452847d77fa5404e9a628d514ac21662c5800ea5e4743504a263e4378 (image=quay.io/ceph/ceph:v19, name=pedantic_keller, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:10 compute-0 lvm[95000]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:16:10 compute-0 lvm[95000]: VG ceph_vg0 finished
Sep 30 14:16:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Sep 30 14:16:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3907676774' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 14:16:10 compute-0 pedantic_keller[94908]: 
Sep 30 14:16:10 compute-0 pedantic_keller[94908]: {"fsid":"5e3c7776-ac03-5698-b79f-a6dc2d80cae6","health":{"status":"HEALTH_ERR","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false},"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":96,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":48,"num_osds":3,"num_up_osds":3,"osd_up_since":1759241709,"num_in_osds":3,"osd_in_since":1759241682,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":105}],"num_pgs":105,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":84623360,"bytes_avail":64327303168,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-09-30T14:15:48:812491+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-09-30T14:15:49.794924+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.buxlkm":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.zeqptq":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.udzudc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","24148":{"start_epoch":5,"start_stamp":"2025-09-30T14:15:48.818621+0000","gid":24148,"addr":"192.168.122.102:0/701204040","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.evkboy","kernel_description":"#1 SMP PREEMPT_DYNAMIC Mon Sep 15 
21:46:13 UTC 2025","kernel_version":"5.14.0-617.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864112","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"8c90dce2-7f45-4620-bb41-00dca60054c7","zone_name":"default","zonegroup_id":"4cef40d9-d614-4ca3-b034-9be9bc4080b8","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{}}
Sep 30 14:16:10 compute-0 systemd[1]: libpod-c79d066452847d77fa5404e9a628d514ac21662c5800ea5e4743504a263e4378.scope: Deactivated successfully.
Sep 30 14:16:10 compute-0 podman[94895]: 2025-09-30 14:16:10.572963004 +0000 UTC m=+0.599436645 container died c79d066452847d77fa5404e9a628d514ac21662c5800ea5e4743504a263e4378 (image=quay.io/ceph/ceph:v19, name=pedantic_keller, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:16:10 compute-0 charming_shirley[94890]: {}
Sep 30 14:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5d03690cc96932f5a2c87c4edda24f37c41e3fe8077ecc6fdec454ca536da42-merged.mount: Deactivated successfully.
Sep 30 14:16:10 compute-0 podman[94895]: 2025-09-30 14:16:10.614050387 +0000 UTC m=+0.640524038 container remove c79d066452847d77fa5404e9a628d514ac21662c5800ea5e4743504a263e4378 (image=quay.io/ceph/ceph:v19, name=pedantic_keller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:16:10 compute-0 systemd[1]: libpod-3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f.scope: Deactivated successfully.
Sep 30 14:16:10 compute-0 systemd[1]: libpod-3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f.scope: Consumed 1.116s CPU time.
Sep 30 14:16:10 compute-0 conmon[94890]: conmon 3e4f80305c21d9f37150 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f.scope/container/memory.events
Sep 30 14:16:10 compute-0 podman[94847]: 2025-09-30 14:16:10.627534792 +0000 UTC m=+0.898064212 container died 3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:16:10 compute-0 sudo[94885]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:10 compute-0 systemd[1]: libpod-conmon-c79d066452847d77fa5404e9a628d514ac21662c5800ea5e4743504a263e4378.scope: Deactivated successfully.
Sep 30 14:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-21812e66bc585a3f6828e5c4b436774b6a721f047c4cee3489f7b39000382812-merged.mount: Deactivated successfully.
Sep 30 14:16:10 compute-0 podman[94847]: 2025-09-30 14:16:10.668402919 +0000 UTC m=+0.938932319 container remove 3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:16:10 compute-0 systemd[1]: libpod-conmon-3e4f80305c21d9f371506b72ac3dc10e8533f27e20a843d7d8e0a8466506865f.scope: Deactivated successfully.
Sep 30 14:16:10 compute-0 sudo[94742]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:16:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:16:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:10 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 605907a7-d3a4-40da-813c-652ea9a26f10 (Updating rgw.rgw deployment (+2 -> 3))
Sep 30 14:16:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.afpjht", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 14:16:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.afpjht", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:16:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.afpjht", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:16:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Sep 30 14:16:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:10 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.afpjht on compute-1
Sep 30 14:16:10 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.afpjht on compute-1
Sep 30 14:16:10 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3907676774' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Sep 30 14:16:10 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:10 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:10 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.afpjht", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:16:10 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.afpjht", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:16:10 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:10 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:10 compute-0 ceph-mon[74194]: Deploying daemon rgw.rgw.compute-1.afpjht on compute-1
Sep 30 14:16:11 compute-0 sudo[95052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyvoqhemxcfwlfyoutuftvmyvuogofak ; /usr/bin/python3'
Sep 30 14:16:11 compute-0 sudo[95052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:16:11 compute-0 python3[95054]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:16:11 compute-0 podman[95055]: 2025-09-30 14:16:11.739653345 +0000 UTC m=+0.048411187 container create 04161b339fac2c44705450397d7360f4ddb88680fa99f3cfe6a94afa043e3958 (image=quay.io/ceph/ceph:v19, name=silly_borg, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:16:11 compute-0 systemd[1]: Started libpod-conmon-04161b339fac2c44705450397d7360f4ddb88680fa99f3cfe6a94afa043e3958.scope.
Sep 30 14:16:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7796c01081355c85146955609a0bb94b02a98d8c81e29674220ef85f7f27a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7796c01081355c85146955609a0bb94b02a98d8c81e29674220ef85f7f27a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v19: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:11 compute-0 podman[95055]: 2025-09-30 14:16:11.72085758 +0000 UTC m=+0.029615442 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:16:11 compute-0 podman[95055]: 2025-09-30 14:16:11.814540988 +0000 UTC m=+0.123298850 container init 04161b339fac2c44705450397d7360f4ddb88680fa99f3cfe6a94afa043e3958 (image=quay.io/ceph/ceph:v19, name=silly_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:16:11 compute-0 podman[95055]: 2025-09-30 14:16:11.82257946 +0000 UTC m=+0.131337302 container start 04161b339fac2c44705450397d7360f4ddb88680fa99f3cfe6a94afa043e3958 (image=quay.io/ceph/ceph:v19, name=silly_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:16:11 compute-0 podman[95055]: 2025-09-30 14:16:11.827262633 +0000 UTC m=+0.136020505 container attach 04161b339fac2c44705450397d7360f4ddb88680fa99f3cfe6a94afa043e3958 (image=quay.io/ceph/ceph:v19, name=silly_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:16:12 compute-0 ceph-mon[74194]: pgmap v19: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Sep 30 14:16:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/930575865' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:16:12 compute-0 silly_borg[95071]: 
Sep 30 14:16:12 compute-0 silly_borg[95071]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.buxlkm/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.zeqptq/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.udzudc/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502780620","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-1.afpjht","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.evkboy","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Sep 30 14:16:12 compute-0 systemd[1]: libpod-04161b339fac2c44705450397d7360f4ddb88680fa99f3cfe6a94afa043e3958.scope: Deactivated successfully.
Sep 30 14:16:12 compute-0 podman[95055]: 2025-09-30 14:16:12.175049047 +0000 UTC m=+0.483806909 container died 04161b339fac2c44705450397d7360f4ddb88680fa99f3cfe6a94afa043e3958 (image=quay.io/ceph/ceph:v19, name=silly_borg, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b7796c01081355c85146955609a0bb94b02a98d8c81e29674220ef85f7f27a3-merged.mount: Deactivated successfully.
Sep 30 14:16:12 compute-0 podman[95055]: 2025-09-30 14:16:12.212618487 +0000 UTC m=+0.521376329 container remove 04161b339fac2c44705450397d7360f4ddb88680fa99f3cfe6a94afa043e3958 (image=quay.io/ceph/ceph:v19, name=silly_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:16:12 compute-0 systemd[1]: libpod-conmon-04161b339fac2c44705450397d7360f4ddb88680fa99f3cfe6a94afa043e3958.scope: Deactivated successfully.
Sep 30 14:16:12 compute-0 sudo[95052]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:16:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:16:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Sep 30 14:16:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ojbkdm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 14:16:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ojbkdm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:16:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ojbkdm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:16:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Sep 30 14:16:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:12 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.ojbkdm on compute-0
Sep 30 14:16:12 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.ojbkdm on compute-0
Sep 30 14:16:13 compute-0 sudo[95108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:16:13 compute-0 sudo[95108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:13 compute-0 sudo[95108]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:13 compute-0 sudo[95174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usvpeuujvlhzfrkvqkjjgzsrzjbjuvln ; /usr/bin/python3'
Sep 30 14:16:13 compute-0 sudo[95174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:16:13 compute-0 sudo[95138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:16:13 compute-0 sudo[95138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:13 compute-0 python3[95181]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:16:13 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/930575865' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Sep 30 14:16:13 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:13 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:13 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:13 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ojbkdm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:16:13 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ojbkdm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:16:13 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:13 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:13 compute-0 podman[95184]: 2025-09-30 14:16:13.266974117 +0000 UTC m=+0.051099877 container create 0bd7fd50bf43d908e86976584b8370f5c7535fa71cf7ff108a24ff47a5b1aeca (image=quay.io/ceph/ceph:v19, name=vigilant_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:16:13 compute-0 systemd[1]: Started libpod-conmon-0bd7fd50bf43d908e86976584b8370f5c7535fa71cf7ff108a24ff47a5b1aeca.scope.
Sep 30 14:16:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502be7388714a992783574b1e547949d192b9533b2af67038c06decc4e058a29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502be7388714a992783574b1e547949d192b9533b2af67038c06decc4e058a29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:13 compute-0 podman[95184]: 2025-09-30 14:16:13.239457752 +0000 UTC m=+0.023583532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:16:13 compute-0 podman[95184]: 2025-09-30 14:16:13.347592802 +0000 UTC m=+0.131718582 container init 0bd7fd50bf43d908e86976584b8370f5c7535fa71cf7ff108a24ff47a5b1aeca (image=quay.io/ceph/ceph:v19, name=vigilant_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 14:16:13 compute-0 podman[95184]: 2025-09-30 14:16:13.35626959 +0000 UTC m=+0.140395350 container start 0bd7fd50bf43d908e86976584b8370f5c7535fa71cf7ff108a24ff47a5b1aeca (image=quay.io/ceph/ceph:v19, name=vigilant_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:16:13 compute-0 podman[95184]: 2025-09-30 14:16:13.36273253 +0000 UTC m=+0.146858310 container attach 0bd7fd50bf43d908e86976584b8370f5c7535fa71cf7ff108a24ff47a5b1aeca (image=quay.io/ceph/ceph:v19, name=vigilant_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:16:13 compute-0 podman[95244]: 2025-09-30 14:16:13.501373042 +0000 UTC m=+0.044926264 container create 1cf744c754ebc7032d745ab6ba6767ec6c89073038d351995e267bfc25f97d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:16:13 compute-0 systemd[1]: Started libpod-conmon-1cf744c754ebc7032d745ab6ba6767ec6c89073038d351995e267bfc25f97d55.scope.
Sep 30 14:16:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:13 compute-0 podman[95244]: 2025-09-30 14:16:13.479083015 +0000 UTC m=+0.022636247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:13 compute-0 podman[95244]: 2025-09-30 14:16:13.575861905 +0000 UTC m=+0.119415147 container init 1cf744c754ebc7032d745ab6ba6767ec6c89073038d351995e267bfc25f97d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_saha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:16:13 compute-0 podman[95244]: 2025-09-30 14:16:13.582076579 +0000 UTC m=+0.125629931 container start 1cf744c754ebc7032d745ab6ba6767ec6c89073038d351995e267bfc25f97d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_saha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:16:13 compute-0 zealous_saha[95279]: 167 167
Sep 30 14:16:13 compute-0 systemd[1]: libpod-1cf744c754ebc7032d745ab6ba6767ec6c89073038d351995e267bfc25f97d55.scope: Deactivated successfully.
Sep 30 14:16:13 compute-0 podman[95244]: 2025-09-30 14:16:13.586405503 +0000 UTC m=+0.129958755 container attach 1cf744c754ebc7032d745ab6ba6767ec6c89073038d351995e267bfc25f97d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:16:13 compute-0 podman[95244]: 2025-09-30 14:16:13.587080631 +0000 UTC m=+0.130633863 container died 1cf744c754ebc7032d745ab6ba6767ec6c89073038d351995e267bfc25f97d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_saha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f1908524b816666f0cc32f1dd46eef003899b461a05f584f571f4fedc30bd24-merged.mount: Deactivated successfully.
Sep 30 14:16:13 compute-0 podman[95244]: 2025-09-30 14:16:13.622437512 +0000 UTC m=+0.165990734 container remove 1cf744c754ebc7032d745ab6ba6767ec6c89073038d351995e267bfc25f97d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:16:13 compute-0 systemd[1]: libpod-conmon-1cf744c754ebc7032d745ab6ba6767ec6c89073038d351995e267bfc25f97d55.scope: Deactivated successfully.
Sep 30 14:16:13 compute-0 systemd[1]: Reloading.
Sep 30 14:16:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Sep 30 14:16:13 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554968796' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Sep 30 14:16:13 compute-0 systemd-rc-local-generator[95323]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:16:13 compute-0 systemd-sysv-generator[95328]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:16:13 compute-0 vigilant_sutherland[95209]: mimic
Sep 30 14:16:13 compute-0 podman[95184]: 2025-09-30 14:16:13.780378404 +0000 UTC m=+0.564504204 container died 0bd7fd50bf43d908e86976584b8370f5c7535fa71cf7ff108a24ff47a5b1aeca (image=quay.io/ceph/ceph:v19, name=vigilant_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:16:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v20: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:13 compute-0 systemd[1]: libpod-0bd7fd50bf43d908e86976584b8370f5c7535fa71cf7ff108a24ff47a5b1aeca.scope: Deactivated successfully.
Sep 30 14:16:13 compute-0 systemd[1]: Reloading.
Sep 30 14:16:14 compute-0 systemd-sysv-generator[95381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:16:14 compute-0 systemd-rc-local-generator[95376]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:16:14 compute-0 podman[95184]: 2025-09-30 14:16:14.213828334 +0000 UTC m=+0.997954094 container remove 0bd7fd50bf43d908e86976584b8370f5c7535fa71cf7ff108a24ff47a5b1aeca (image=quay.io/ceph/ceph:v19, name=vigilant_sutherland, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:16:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-502be7388714a992783574b1e547949d192b9533b2af67038c06decc4e058a29-merged.mount: Deactivated successfully.
Sep 30 14:16:14 compute-0 systemd[1]: libpod-conmon-0bd7fd50bf43d908e86976584b8370f5c7535fa71cf7ff108a24ff47a5b1aeca.scope: Deactivated successfully.
Sep 30 14:16:14 compute-0 sudo[95174]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:14 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.ojbkdm for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:16:14 compute-0 ceph-mon[74194]: Deploying daemon rgw.rgw.compute-0.ojbkdm on compute-0
Sep 30 14:16:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1554968796' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Sep 30 14:16:14 compute-0 ceph-mon[74194]: pgmap v20: 105 pgs: 105 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:14 compute-0 podman[95437]: 2025-09-30 14:16:14.433897383 +0000 UTC m=+0.033199146 container create 92a136fc2fde9ebb92eb1bf0c2df2860389e289ee61b43b2b77570d8ab2711f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-rgw-rgw-compute-0-ojbkdm, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099a3e3aec79aeb092ae73189b1d7c3f0a839668573fea3d03c460d68d7d89f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099a3e3aec79aeb092ae73189b1d7c3f0a839668573fea3d03c460d68d7d89f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099a3e3aec79aeb092ae73189b1d7c3f0a839668573fea3d03c460d68d7d89f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099a3e3aec79aeb092ae73189b1d7c3f0a839668573fea3d03c460d68d7d89f7/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.ojbkdm supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:14 compute-0 podman[95437]: 2025-09-30 14:16:14.486431407 +0000 UTC m=+0.085733190 container init 92a136fc2fde9ebb92eb1bf0c2df2860389e289ee61b43b2b77570d8ab2711f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-rgw-rgw-compute-0-ojbkdm, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:16:14 compute-0 podman[95437]: 2025-09-30 14:16:14.492288141 +0000 UTC m=+0.091589894 container start 92a136fc2fde9ebb92eb1bf0c2df2860389e289ee61b43b2b77570d8ab2711f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-rgw-rgw-compute-0-ojbkdm, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:16:14 compute-0 bash[95437]: 92a136fc2fde9ebb92eb1bf0c2df2860389e289ee61b43b2b77570d8ab2711f6
Sep 30 14:16:14 compute-0 podman[95437]: 2025-09-30 14:16:14.419724779 +0000 UTC m=+0.019026562 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:14 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.ojbkdm for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:16:14 compute-0 radosgw[95456]: deferred set uid:gid to 167:167 (ceph:ceph)
Sep 30 14:16:14 compute-0 radosgw[95456]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Sep 30 14:16:14 compute-0 radosgw[95456]: framework: beast
Sep 30 14:16:14 compute-0 radosgw[95456]: framework conf key: endpoint, val: 192.168.122.100:8082
Sep 30 14:16:14 compute-0 radosgw[95456]: init_numa not setting numa affinity
Sep 30 14:16:14 compute-0 sudo[95138]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:16:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:16:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Sep 30 14:16:14 compute-0 radosgw[95456]: v1 topic migration: starting v1 topic migration..
Sep 30 14:16:14 compute-0 radosgw[95456]: LDAP not started since no server URIs were provided in the configuration.
Sep 30 14:16:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-rgw-rgw-compute-0-ojbkdm[95452]: 2025-09-30T14:16:14.762+0000 7f92324c6980 -1 LDAP not started since no server URIs were provided in the configuration.
Sep 30 14:16:14 compute-0 radosgw[95456]: v1 topic migration: finished v1 topic migration
Sep 30 14:16:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:14 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 605907a7-d3a4-40da-813c-652ea9a26f10 (Updating rgw.rgw deployment (+2 -> 3))
Sep 30 14:16:14 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 605907a7-d3a4-40da-813c-652ea9a26f10 (Updating rgw.rgw deployment (+2 -> 3)) in 4 seconds
Sep 30 14:16:14 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Sep 30 14:16:14 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Sep 30 14:16:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Sep 30 14:16:14 compute-0 radosgw[95456]: framework: beast
Sep 30 14:16:14 compute-0 radosgw[95456]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Sep 30 14:16:14 compute-0 radosgw[95456]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Sep 30 14:16:14 compute-0 radosgw[95456]: starting handler: beast
Sep 30 14:16:14 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:16:14 compute-0 radosgw[95456]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 14:16:14 compute-0 radosgw[95456]: mgrc service_daemon_register rgw.14556 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.ojbkdm,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025,kernel_version=5.14.0-617.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864116,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=8c90dce2-7f45-4620-bb41-00dca60054c7,zone_name=default,zonegroup_id=4cef40d9-d614-4ca3-b034-9be9bc4080b8,zonegroup_name=default}
Sep 30 14:16:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Sep 30 14:16:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:14 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev e83ecd1d-c0d0-4021-a5da-a01e871e6088 (Updating mds.cephfs deployment (+3 -> 3))
Sep 30 14:16:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.cdakzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Sep 30 14:16:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.cdakzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Sep 30 14:16:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.cdakzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Sep 30 14:16:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:14 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.cdakzt on compute-2
Sep 30 14:16:14 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.cdakzt on compute-2
Sep 30 14:16:15 compute-0 sudo[96102]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohxicddxlkqbdukphjiizohpvycychi ; /usr/bin/python3'
Sep 30 14:16:15 compute-0 sudo[96102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:16:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:15 compute-0 python3[96104]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:16:15 compute-0 podman[96105]: 2025-09-30 14:16:15.241554473 +0000 UTC m=+0.055168864 container create afba62284afe8f05fd4f6d4e9a9d3b718073a4837d253661b538fd8e169232bf (image=quay.io/ceph/ceph:v19, name=nostalgic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:16:15 compute-0 systemd[1]: Started libpod-conmon-afba62284afe8f05fd4f6d4e9a9d3b718073a4837d253661b538fd8e169232bf.scope.
Sep 30 14:16:15 compute-0 podman[96105]: 2025-09-30 14:16:15.211287026 +0000 UTC m=+0.024901437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:16:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f2c4f2e7e0d415ae405cb4338474e21982cb4414a3f291a8ee4bf076e2ac9a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f2c4f2e7e0d415ae405cb4338474e21982cb4414a3f291a8ee4bf076e2ac9a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:15 compute-0 podman[96105]: 2025-09-30 14:16:15.33784814 +0000 UTC m=+0.151462551 container init afba62284afe8f05fd4f6d4e9a9d3b718073a4837d253661b538fd8e169232bf (image=quay.io/ceph/ceph:v19, name=nostalgic_goldberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:16:15 compute-0 podman[96105]: 2025-09-30 14:16:15.346252512 +0000 UTC m=+0.159866903 container start afba62284afe8f05fd4f6d4e9a9d3b718073a4837d253661b538fd8e169232bf (image=quay.io/ceph/ceph:v19, name=nostalgic_goldberg, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 14:16:15 compute-0 podman[96105]: 2025-09-30 14:16:15.353215715 +0000 UTC m=+0.166830126 container attach afba62284afe8f05fd4f6d4e9a9d3b718073a4837d253661b538fd8e169232bf (image=quay.io/ceph/ceph:v19, name=nostalgic_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:16:15 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:15 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:15 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:15 compute-0 ceph-mon[74194]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Sep 30 14:16:15 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:15 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:15 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.cdakzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Sep 30 14:16:15 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.cdakzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Sep 30 14:16:15 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v21: 105 pgs: 105 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 0 B/s wr, 75 op/s
Sep 30 14:16:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Sep 30 14:16:15 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790835595' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Sep 30 14:16:15 compute-0 nostalgic_goldberg[96120]: 
Sep 30 14:16:15 compute-0 systemd[1]: libpod-afba62284afe8f05fd4f6d4e9a9d3b718073a4837d253661b538fd8e169232bf.scope: Deactivated successfully.
Sep 30 14:16:15 compute-0 conmon[96120]: conmon afba62284afe8f05fd4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-afba62284afe8f05fd4f6d4e9a9d3b718073a4837d253661b538fd8e169232bf.scope/container/memory.events
Sep 30 14:16:15 compute-0 nostalgic_goldberg[96120]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":2},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":11}}
Sep 30 14:16:15 compute-0 podman[96105]: 2025-09-30 14:16:15.82668589 +0000 UTC m=+0.640300311 container died afba62284afe8f05fd4f6d4e9a9d3b718073a4837d253661b538fd8e169232bf (image=quay.io/ceph/ceph:v19, name=nostalgic_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:16:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-63f2c4f2e7e0d415ae405cb4338474e21982cb4414a3f291a8ee4bf076e2ac9a-merged.mount: Deactivated successfully.
Sep 30 14:16:15 compute-0 podman[96105]: 2025-09-30 14:16:15.882363037 +0000 UTC m=+0.695977428 container remove afba62284afe8f05fd4f6d4e9a9d3b718073a4837d253661b538fd8e169232bf (image=quay.io/ceph/ceph:v19, name=nostalgic_goldberg, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:15 compute-0 systemd[1]: libpod-conmon-afba62284afe8f05fd4f6d4e9a9d3b718073a4837d253661b538fd8e169232bf.scope: Deactivated successfully.
Sep 30 14:16:15 compute-0 sudo[96102]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:16 compute-0 ceph-mon[74194]: Deploying daemon mds.cephfs.compute-2.cdakzt on compute-2
Sep 30 14:16:16 compute-0 ceph-mon[74194]: pgmap v21: 105 pgs: 105 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 0 B/s wr, 75 op/s
Sep 30 14:16:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/790835595' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gqfeob", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gqfeob", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gqfeob", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:17 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.gqfeob on compute-0
Sep 30 14:16:17 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.gqfeob on compute-0
Sep 30 14:16:17 compute-0 sudo[96156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:16:17 compute-0 sudo[96156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:17 compute-0 sudo[96156]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:17 compute-0 sudo[96181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:16:17 compute-0 sudo[96181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v22: 105 pgs: 105 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 0 B/s wr, 75 op/s
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e3 new map
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-09-30T14:16:17:791947+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T14:15:48.812444+0000
                                           modified        2025-09-30T14:15:48.812444+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.cdakzt{-1:24208} state up:standby seq 1 addr [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 14:16:17 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 11 completed events
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] up:boot
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] as mds.0
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.cdakzt assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.cdakzt"} v 0)
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.cdakzt"}]: dispatch
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e3 all = 0
Sep 30 14:16:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:16:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:16:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:16:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:16:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:16:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e4 new map
Sep 30 14:16:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-09-30T14:16:17:862481+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T14:15:48.812444+0000
                                           modified        2025-09-30T14:16:17.862474+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24208}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.cdakzt{0:24208} state up:creating seq 1 addr [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:creating}
Sep 30 14:16:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:18 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.cdakzt is now active in filesystem cephfs as rank 0
Sep 30 14:16:18 compute-0 podman[96245]: 2025-09-30 14:16:18.101484497 +0000 UTC m=+0.035655850 container create 832e24e752ae6ade6f1c93c1fb06a5720c0ba9d7b48390e1a7708bed99d951cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:16:18 compute-0 systemd[1]: Started libpod-conmon-832e24e752ae6ade6f1c93c1fb06a5720c0ba9d7b48390e1a7708bed99d951cd.scope.
Sep 30 14:16:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:18 compute-0 podman[96245]: 2025-09-30 14:16:18.173006721 +0000 UTC m=+0.107178115 container init 832e24e752ae6ade6f1c93c1fb06a5720c0ba9d7b48390e1a7708bed99d951cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:18 compute-0 podman[96245]: 2025-09-30 14:16:18.179831001 +0000 UTC m=+0.114002364 container start 832e24e752ae6ade6f1c93c1fb06a5720c0ba9d7b48390e1a7708bed99d951cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:16:18 compute-0 podman[96245]: 2025-09-30 14:16:18.085479045 +0000 UTC m=+0.019650428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:18 compute-0 podman[96245]: 2025-09-30 14:16:18.184447603 +0000 UTC m=+0.118618966 container attach 832e24e752ae6ade6f1c93c1fb06a5720c0ba9d7b48390e1a7708bed99d951cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:16:18 compute-0 infallible_borg[96261]: 167 167
Sep 30 14:16:18 compute-0 systemd[1]: libpod-832e24e752ae6ade6f1c93c1fb06a5720c0ba9d7b48390e1a7708bed99d951cd.scope: Deactivated successfully.
Sep 30 14:16:18 compute-0 conmon[96261]: conmon 832e24e752ae6ade6f1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-832e24e752ae6ade6f1c93c1fb06a5720c0ba9d7b48390e1a7708bed99d951cd.scope/container/memory.events
Sep 30 14:16:18 compute-0 podman[96245]: 2025-09-30 14:16:18.187025411 +0000 UTC m=+0.121196784 container died 832e24e752ae6ade6f1c93c1fb06a5720c0ba9d7b48390e1a7708bed99d951cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:16:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0be4f0c588a3c1956ed881d74ab0ce895ed4d623366414a81ef8c0317fe34a40-merged.mount: Deactivated successfully.
Sep 30 14:16:18 compute-0 podman[96245]: 2025-09-30 14:16:18.221735185 +0000 UTC m=+0.155906548 container remove 832e24e752ae6ade6f1c93c1fb06a5720c0ba9d7b48390e1a7708bed99d951cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:16:18 compute-0 systemd[1]: libpod-conmon-832e24e752ae6ade6f1c93c1fb06a5720c0ba9d7b48390e1a7708bed99d951cd.scope: Deactivated successfully.
Sep 30 14:16:18 compute-0 systemd[1]: Reloading.
Sep 30 14:16:18 compute-0 systemd-rc-local-generator[96305]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:16:18 compute-0 systemd-sysv-generator[96310]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:16:18 compute-0 systemd[1]: Reloading.
Sep 30 14:16:18 compute-0 systemd-sysv-generator[96351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:16:18 compute-0 systemd-rc-local-generator[96346]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:16:18 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:18 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:18 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:18 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gqfeob", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Sep 30 14:16:18 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gqfeob", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Sep 30 14:16:18 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:18 compute-0 ceph-mon[74194]: Deploying daemon mds.cephfs.compute-0.gqfeob on compute-0
Sep 30 14:16:18 compute-0 ceph-mon[74194]: pgmap v22: 105 pgs: 105 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 0 B/s wr, 75 op/s
Sep 30 14:16:18 compute-0 ceph-mon[74194]: mds.? [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] up:boot
Sep 30 14:16:18 compute-0 ceph-mon[74194]: daemon mds.cephfs.compute-2.cdakzt assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Sep 30 14:16:18 compute-0 ceph-mon[74194]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Sep 30 14:16:18 compute-0 ceph-mon[74194]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Sep 30 14:16:18 compute-0 ceph-mon[74194]: fsmap cephfs:0 1 up:standby
Sep 30 14:16:18 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.cdakzt"}]: dispatch
Sep 30 14:16:18 compute-0 ceph-mon[74194]: fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:creating}
Sep 30 14:16:18 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:18 compute-0 ceph-mon[74194]: daemon mds.cephfs.compute-2.cdakzt is now active in filesystem cephfs as rank 0
Sep 30 14:16:18 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.gqfeob for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:16:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e5 new map
Sep 30 14:16:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-09-30T14:16:18:985930+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T14:15:48.812444+0000
                                           modified        2025-09-30T14:16:18.985927+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24208}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24208 members: 24208
                                           [mds.cephfs.compute-2.cdakzt{0:24208} state up:active seq 2 addr [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Sep 30 14:16:19 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] up:active
Sep 30 14:16:19 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active}
Sep 30 14:16:19 compute-0 podman[96405]: 2025-09-30 14:16:19.038784903 +0000 UTC m=+0.041350730 container create f2adb5791a14e4807ea26189a4ce48eba8f88db1d8d8e2a14f60eba73a378205 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mds-cephfs-compute-0-gqfeob, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441b78c8455bad925896e788e38b3a8c5b72ca2c368135a88e4f5d52e92c702/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441b78c8455bad925896e788e38b3a8c5b72ca2c368135a88e4f5d52e92c702/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441b78c8455bad925896e788e38b3a8c5b72ca2c368135a88e4f5d52e92c702/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0441b78c8455bad925896e788e38b3a8c5b72ca2c368135a88e4f5d52e92c702/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.gqfeob supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:19 compute-0 podman[96405]: 2025-09-30 14:16:19.1043022 +0000 UTC m=+0.106868047 container init f2adb5791a14e4807ea26189a4ce48eba8f88db1d8d8e2a14f60eba73a378205 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mds-cephfs-compute-0-gqfeob, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:16:19 compute-0 podman[96405]: 2025-09-30 14:16:19.110105303 +0000 UTC m=+0.112671130 container start f2adb5791a14e4807ea26189a4ce48eba8f88db1d8d8e2a14f60eba73a378205 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mds-cephfs-compute-0-gqfeob, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:16:19 compute-0 bash[96405]: f2adb5791a14e4807ea26189a4ce48eba8f88db1d8d8e2a14f60eba73a378205
Sep 30 14:16:19 compute-0 podman[96405]: 2025-09-30 14:16:19.021025425 +0000 UTC m=+0.023591272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:19 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.gqfeob for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:16:19 compute-0 ceph-mds[96424]: set uid:gid to 167:167 (ceph:ceph)
Sep 30 14:16:19 compute-0 ceph-mds[96424]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Sep 30 14:16:19 compute-0 ceph-mds[96424]: main not setting numa affinity
Sep 30 14:16:19 compute-0 ceph-mds[96424]: pidfile_write: ignore empty --pid-file
Sep 30 14:16:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mds-cephfs-compute-0-gqfeob[96420]: starting mds.cephfs.compute-0.gqfeob at 
Sep 30 14:16:19 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Updating MDS map to version 5 from mon.0
Sep 30 14:16:19 compute-0 sudo[96181]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:16:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:16:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 14:16:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gwmnhp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Sep 30 14:16:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gwmnhp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Sep 30 14:16:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gwmnhp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Sep 30 14:16:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:19 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:19 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.gwmnhp on compute-1
Sep 30 14:16:19 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.gwmnhp on compute-1
Sep 30 14:16:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v23: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mds.? [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] up:active
Sep 30 14:16:20 compute-0 ceph-mon[74194]: fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active}
Sep 30 14:16:20 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:20 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:20 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:20 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gwmnhp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Sep 30 14:16:20 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gwmnhp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Sep 30 14:16:20 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:20 compute-0 ceph-mon[74194]: Deploying daemon mds.cephfs.compute-1.gwmnhp on compute-1
Sep 30 14:16:20 compute-0 ceph-mon[74194]: pgmap v23: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e6 new map
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-09-30T14:16:20:007058+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T14:15:48.812444+0000
                                           modified        2025-09-30T14:16:18.985927+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24208}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24208 members: 24208
                                           [mds.cephfs.compute-2.cdakzt{0:24208} state up:active seq 2 addr [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.gqfeob{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/576743450,v1:192.168.122.100:6807/576743450] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 14:16:20 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Updating MDS map to version 6 from mon.0
Sep 30 14:16:20 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Monitors have assigned me to become a standby
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/576743450,v1:192.168.122.100:6807/576743450] up:boot
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 1 up:standby
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.gqfeob"} v 0)
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.gqfeob"}]: dispatch
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e6 all = 0
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e7 new map
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-09-30T14:16:20:168840+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T14:15:48.812444+0000
                                           modified        2025-09-30T14:16:18.985927+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24208}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24208 members: 24208
                                           [mds.cephfs.compute-2.cdakzt{0:24208} state up:active seq 2 addr [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.gqfeob{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/576743450,v1:192.168.122.100:6807/576743450] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 1 up:standby
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:20 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev e83ecd1d-c0d0-4021-a5da-a01e871e6088 (Updating mds.cephfs deployment (+3 -> 3))
Sep 30 14:16:20 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event e83ecd1d-c0d0-4021-a5da-a01e871e6088 (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:20 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 118e0ae2-a241-44bf-a95a-79aa8a0a87de (Updating nfs.cephfs deployment (+3 -> 3))
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:20 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.mybdtc
Sep 30 14:16:20 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.mybdtc
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mybdtc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mybdtc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mybdtc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Sep 30 14:16:20 compute-0 ceph-mgr[74485]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Sep 30 14:16:20 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Sep 30 14:16:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Sep 30 14:16:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Sep 30 14:16:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:21 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Sep 30 14:16:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Sep 30 14:16:21 compute-0 ceph-mon[74194]: mds.? [v2:192.168.122.100:6806/576743450,v1:192.168.122.100:6807/576743450] up:boot
Sep 30 14:16:21 compute-0 ceph-mon[74194]: fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 1 up:standby
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.gqfeob"}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mon[74194]: fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 1 up:standby
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mybdtc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mybdtc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Sep 30 14:16:21 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Sep 30 14:16:21 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Sep 30 14:16:21 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.mybdtc-rgw
Sep 30 14:16:21 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.mybdtc-rgw
Sep 30 14:16:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mybdtc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 14:16:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mybdtc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mybdtc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:16:21 compute-0 ceph-mgr[74485]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.mybdtc's ganesha conf is defaulting to empty
Sep 30 14:16:21 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.mybdtc's ganesha conf is defaulting to empty
Sep 30 14:16:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:21 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.mybdtc on compute-1
Sep 30 14:16:21 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.mybdtc on compute-1
Sep 30 14:16:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e8 new map
Sep 30 14:16:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-09-30T14:16:21:235886+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T14:15:48.812444+0000
                                           modified        2025-09-30T14:16:18.985927+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24208}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24208 members: 24208
                                           [mds.cephfs.compute-2.cdakzt{0:24208} state up:active seq 2 addr [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.gqfeob{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/576743450,v1:192.168.122.100:6807/576743450] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.gwmnhp{-1:24242} state up:standby seq 1 addr [v2:192.168.122.101:6804/1218049431,v1:192.168.122.101:6805/1218049431] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 14:16:21 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1218049431,v1:192.168.122.101:6805/1218049431] up:boot
Sep 30 14:16:21 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 2 up:standby
Sep 30 14:16:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.gwmnhp"} v 0)
Sep 30 14:16:21 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.gwmnhp"}]: dispatch
Sep 30 14:16:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e8 all = 0
Sep 30 14:16:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v24: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Sep 30 14:16:22 compute-0 ceph-mon[74194]: Creating key for client.nfs.cephfs.0.0.compute-1.mybdtc
Sep 30 14:16:22 compute-0 ceph-mon[74194]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Sep 30 14:16:22 compute-0 ceph-mon[74194]: Rados config object exists: conf-nfs.cephfs
Sep 30 14:16:22 compute-0 ceph-mon[74194]: Creating key for client.nfs.cephfs.0.0.compute-1.mybdtc-rgw
Sep 30 14:16:22 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mybdtc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:16:22 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mybdtc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:16:22 compute-0 ceph-mon[74194]: Bind address in nfs.cephfs.0.0.compute-1.mybdtc's ganesha conf is defaulting to empty
Sep 30 14:16:22 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:22 compute-0 ceph-mon[74194]: Deploying daemon nfs.cephfs.0.0.compute-1.mybdtc on compute-1
Sep 30 14:16:22 compute-0 ceph-mon[74194]: mds.? [v2:192.168.122.101:6804/1218049431,v1:192.168.122.101:6805/1218049431] up:boot
Sep 30 14:16:22 compute-0 ceph-mon[74194]: fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 2 up:standby
Sep 30 14:16:22 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.gwmnhp"}]: dispatch
Sep 30 14:16:22 compute-0 ceph-mon[74194]: pgmap v24: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Sep 30 14:16:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e9 new map
Sep 30 14:16:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2025-09-30T14:16:22:243445+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T14:15:48.812444+0000
                                           modified        2025-09-30T14:16:22.016891+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24208}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24208 members: 24208
                                           [mds.cephfs.compute-2.cdakzt{0:24208} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.gqfeob{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/576743450,v1:192.168.122.100:6807/576743450] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.gwmnhp{-1:24242} state up:standby seq 1 addr [v2:192.168.122.101:6804/1218049431,v1:192.168.122.101:6805/1218049431] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 14:16:22 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] up:active
Sep 30 14:16:22 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 2 up:standby
Sep 30 14:16:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:16:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:16:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:16:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:22 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.jhairi
Sep 30 14:16:22 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.jhairi
Sep 30 14:16:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jhairi", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Sep 30 14:16:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jhairi", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Sep 30 14:16:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jhairi", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Sep 30 14:16:22 compute-0 ceph-mgr[74485]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Sep 30 14:16:22 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Sep 30 14:16:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Sep 30 14:16:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Sep 30 14:16:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Sep 30 14:16:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:22 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 12 completed events
Sep 30 14:16:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:16:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:23 compute-0 ceph-mon[74194]: mds.? [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] up:active
Sep 30 14:16:23 compute-0 ceph-mon[74194]: fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 2 up:standby
Sep 30 14:16:23 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:23 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:23 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:23 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jhairi", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Sep 30 14:16:23 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jhairi", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Sep 30 14:16:23 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Sep 30 14:16:23 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Sep 30 14:16:23 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:23 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v25: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Sep 30 14:16:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e10 new map
Sep 30 14:16:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           btime 2025-09-30T14:16:24:057740+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T14:15:48.812444+0000
                                           modified        2025-09-30T14:16:22.016891+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24208}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24208 members: 24208
                                           [mds.cephfs.compute-2.cdakzt{0:24208} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.gqfeob{-1:14568} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/576743450,v1:192.168.122.100:6807/576743450] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.gwmnhp{-1:24242} state up:standby seq 1 addr [v2:192.168.122.101:6804/1218049431,v1:192.168.122.101:6805/1218049431] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 14:16:24 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Updating MDS map to version 10 from mon.0
Sep 30 14:16:24 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/576743450,v1:192.168.122.100:6807/576743450] up:standby
Sep 30 14:16:24 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 2 up:standby
Sep 30 14:16:24 compute-0 ceph-mon[74194]: Creating key for client.nfs.cephfs.1.0.compute-2.jhairi
Sep 30 14:16:24 compute-0 ceph-mon[74194]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Sep 30 14:16:24 compute-0 ceph-mon[74194]: pgmap v25: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 KiB/s wr, 167 op/s
Sep 30 14:16:24 compute-0 ceph-mon[74194]: mds.? [v2:192.168.122.100:6806/576743450,v1:192.168.122.100:6807/576743450] up:standby
Sep 30 14:16:24 compute-0 ceph-mon[74194]: fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 2 up:standby
Sep 30 14:16:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e11 new map
Sep 30 14:16:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e11 print_map
                                           e11
                                           btime 2025-09-30T14:16:25.498009+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-09-30T14:15:48.812444+0000
                                           modified        2025-09-30T14:16:22.016891+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24208}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24208 members: 24208
                                           [mds.cephfs.compute-2.cdakzt{0:24208} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3684216434,v1:192.168.122.102:6805/3684216434] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.gqfeob{-1:14568} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/576743450,v1:192.168.122.100:6807/576743450] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.gwmnhp{-1:24242} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1218049431,v1:192.168.122.101:6805/1218049431] compat {c=[1],r=[1],i=[1fff]}]
Sep 30 14:16:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1218049431,v1:192.168.122.101:6805/1218049431] up:standby
Sep 30 14:16:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 2 up:standby
Sep 30 14:16:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v26: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 1.9 KiB/s wr, 169 op/s
Sep 30 14:16:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Sep 30 14:16:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Sep 30 14:16:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Sep 30 14:16:26 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Sep 30 14:16:26 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Sep 30 14:16:26 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.jhairi-rgw
Sep 30 14:16:26 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.jhairi-rgw
Sep 30 14:16:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jhairi-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 14:16:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jhairi-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:16:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jhairi-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:16:26 compute-0 ceph-mgr[74485]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.jhairi's ganesha conf is defaulting to empty
Sep 30 14:16:26 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.jhairi's ganesha conf is defaulting to empty
Sep 30 14:16:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:26 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.jhairi on compute-2
Sep 30 14:16:26 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.jhairi on compute-2
Sep 30 14:16:26 compute-0 ceph-mon[74194]: mds.? [v2:192.168.122.101:6804/1218049431,v1:192.168.122.101:6805/1218049431] up:standby
Sep 30 14:16:26 compute-0 ceph-mon[74194]: fsmap cephfs:1 {0=cephfs.compute-2.cdakzt=up:active} 2 up:standby
Sep 30 14:16:26 compute-0 ceph-mon[74194]: pgmap v26: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 1.9 KiB/s wr, 169 op/s
Sep 30 14:16:26 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Sep 30 14:16:26 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Sep 30 14:16:26 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jhairi-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:16:26 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jhairi-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:16:26 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:27 compute-0 ceph-mon[74194]: Rados config object exists: conf-nfs.cephfs
Sep 30 14:16:27 compute-0 ceph-mon[74194]: Creating key for client.nfs.cephfs.1.0.compute-2.jhairi-rgw
Sep 30 14:16:27 compute-0 ceph-mon[74194]: Bind address in nfs.cephfs.1.0.compute-2.jhairi's ganesha conf is defaulting to empty
Sep 30 14:16:27 compute-0 ceph-mon[74194]: Deploying daemon nfs.cephfs.1.0.compute-2.jhairi on compute-2
Sep 30 14:16:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v27: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.9 KiB/s wr, 93 op/s
Sep 30 14:16:28 compute-0 ceph-mon[74194]: pgmap v27: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.9 KiB/s wr, 93 op/s
Sep 30 14:16:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:16:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:16:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:16:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:29 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.qrbicy
Sep 30 14:16:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.qrbicy
Sep 30 14:16:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.qrbicy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Sep 30 14:16:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.qrbicy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Sep 30 14:16:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.qrbicy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Sep 30 14:16:29 compute-0 ceph-mgr[74485]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Sep 30 14:16:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Sep 30 14:16:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Sep 30 14:16:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Sep 30 14:16:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Sep 30 14:16:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v28: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.9 KiB/s wr, 94 op/s
Sep 30 14:16:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:30 compute-0 ceph-mon[74194]: Creating key for client.nfs.cephfs.2.0.compute-0.qrbicy
Sep 30 14:16:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.qrbicy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Sep 30 14:16:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.qrbicy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Sep 30 14:16:30 compute-0 ceph-mon[74194]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Sep 30 14:16:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Sep 30 14:16:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Sep 30 14:16:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:30 compute-0 ceph-mon[74194]: pgmap v28: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.9 KiB/s wr, 94 op/s
Sep 30 14:16:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v29: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 2 op/s
Sep 30 14:16:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Sep 30 14:16:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Sep 30 14:16:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Sep 30 14:16:32 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Sep 30 14:16:32 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Sep 30 14:16:32 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.qrbicy-rgw
Sep 30 14:16:32 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.qrbicy-rgw
Sep 30 14:16:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.qrbicy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 14:16:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.qrbicy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:16:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.qrbicy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:16:32 compute-0 ceph-mgr[74485]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.qrbicy's ganesha conf is defaulting to empty
Sep 30 14:16:32 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.qrbicy's ganesha conf is defaulting to empty
Sep 30 14:16:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:16:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:32 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.qrbicy on compute-0
Sep 30 14:16:32 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.qrbicy on compute-0
Sep 30 14:16:32 compute-0 sudo[96552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:16:32 compute-0 sudo[96552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:32 compute-0 sudo[96552]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:32 compute-0 sudo[96577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:16:32 compute-0 sudo[96577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:32 compute-0 ceph-mon[74194]: pgmap v29: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 2 op/s
Sep 30 14:16:32 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Sep 30 14:16:32 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Sep 30 14:16:32 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.qrbicy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:16:32 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.qrbicy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Sep 30 14:16:32 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:16:32 compute-0 podman[96643]: 2025-09-30 14:16:32.987266459 +0000 UTC m=+0.043862167 container create 82281186e89641429e2ec103c758ddca479e474780b0aede8438e371040d5345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_clarke, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 14:16:33 compute-0 systemd[1]: Started libpod-conmon-82281186e89641429e2ec103c758ddca479e474780b0aede8438e371040d5345.scope.
Sep 30 14:16:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:33 compute-0 podman[96643]: 2025-09-30 14:16:32.967789406 +0000 UTC m=+0.024385134 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:33 compute-0 podman[96643]: 2025-09-30 14:16:33.078832601 +0000 UTC m=+0.135428329 container init 82281186e89641429e2ec103c758ddca479e474780b0aede8438e371040d5345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_clarke, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:16:33 compute-0 podman[96643]: 2025-09-30 14:16:33.088347502 +0000 UTC m=+0.144943240 container start 82281186e89641429e2ec103c758ddca479e474780b0aede8438e371040d5345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_clarke, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:16:33 compute-0 podman[96643]: 2025-09-30 14:16:33.092856421 +0000 UTC m=+0.149452219 container attach 82281186e89641429e2ec103c758ddca479e474780b0aede8438e371040d5345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:16:33 compute-0 vigilant_clarke[96660]: 167 167
Sep 30 14:16:33 compute-0 systemd[1]: libpod-82281186e89641429e2ec103c758ddca479e474780b0aede8438e371040d5345.scope: Deactivated successfully.
Sep 30 14:16:33 compute-0 conmon[96660]: conmon 82281186e89641429e2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82281186e89641429e2ec103c758ddca479e474780b0aede8438e371040d5345.scope/container/memory.events
Sep 30 14:16:33 compute-0 podman[96643]: 2025-09-30 14:16:33.097519194 +0000 UTC m=+0.154114942 container died 82281186e89641429e2ec103c758ddca479e474780b0aede8438e371040d5345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3201e5adc280760f26db7f1febbf2ef3b628df0493787c1f46b39511f67e692-merged.mount: Deactivated successfully.
Sep 30 14:16:33 compute-0 podman[96643]: 2025-09-30 14:16:33.13949948 +0000 UTC m=+0.196095188 container remove 82281186e89641429e2ec103c758ddca479e474780b0aede8438e371040d5345 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:16:33 compute-0 systemd[1]: libpod-conmon-82281186e89641429e2ec103c758ddca479e474780b0aede8438e371040d5345.scope: Deactivated successfully.
Sep 30 14:16:33 compute-0 systemd[1]: Reloading.
Sep 30 14:16:33 compute-0 systemd-sysv-generator[96708]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:16:33 compute-0 systemd-rc-local-generator[96704]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:16:33 compute-0 systemd[1]: Reloading.
Sep 30 14:16:33 compute-0 systemd-rc-local-generator[96744]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:16:33 compute-0 systemd-sysv-generator[96748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:16:33 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:16:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v30: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 2 op/s
Sep 30 14:16:33 compute-0 ceph-mon[74194]: Rados config object exists: conf-nfs.cephfs
Sep 30 14:16:33 compute-0 ceph-mon[74194]: Creating key for client.nfs.cephfs.2.0.compute-0.qrbicy-rgw
Sep 30 14:16:33 compute-0 ceph-mon[74194]: Bind address in nfs.cephfs.2.0.compute-0.qrbicy's ganesha conf is defaulting to empty
Sep 30 14:16:33 compute-0 ceph-mon[74194]: Deploying daemon nfs.cephfs.2.0.compute-0.qrbicy on compute-0
Sep 30 14:16:33 compute-0 podman[96801]: 2025-09-30 14:16:33.977213542 +0000 UTC m=+0.043773594 container create 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04bd5e04b989fcfa58fe4b394da2d15c4df665969f1a586e1386d49d135dadd4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04bd5e04b989fcfa58fe4b394da2d15c4df665969f1a586e1386d49d135dadd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04bd5e04b989fcfa58fe4b394da2d15c4df665969f1a586e1386d49d135dadd4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04bd5e04b989fcfa58fe4b394da2d15c4df665969f1a586e1386d49d135dadd4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:34 compute-0 sshd-session[96635]: Invalid user tao from 210.90.155.80 port 50396
Sep 30 14:16:34 compute-0 podman[96801]: 2025-09-30 14:16:34.048011418 +0000 UTC m=+0.114571490 container init 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 14:16:34 compute-0 podman[96801]: 2025-09-30 14:16:34.053476532 +0000 UTC m=+0.120036584 container start 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:16:34 compute-0 podman[96801]: 2025-09-30 14:16:33.960528033 +0000 UTC m=+0.027088105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:16:34 compute-0 bash[96801]: 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5
Sep 30 14:16:34 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:16:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:16:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:16:34 compute-0 sudo[96577]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:16:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:16:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:16:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:16:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:16:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:16:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:16:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:16:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:34 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 118e0ae2-a241-44bf-a95a-79aa8a0a87de (Updating nfs.cephfs deployment (+3 -> 3))
Sep 30 14:16:34 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 118e0ae2-a241-44bf-a95a-79aa8a0a87de (Updating nfs.cephfs deployment (+3 -> 3)) in 13 seconds
Sep 30 14:16:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:16:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:16:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:34 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 2eae1830-bde6-4594-8917-8eee46c8750a (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Sep 30 14:16:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Sep 30 14:16:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:34 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.oyovcd on compute-1
Sep 30 14:16:34 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.oyovcd on compute-1
Sep 30 14:16:34 compute-0 sshd-session[96635]: Received disconnect from 210.90.155.80 port 50396:11: Bye Bye [preauth]
Sep 30 14:16:34 compute-0 sshd-session[96635]: Disconnected from invalid user tao 210.90.155.80 port 50396 [preauth]
Sep 30 14:16:34 compute-0 ceph-mon[74194]: pgmap v30: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 2 op/s
Sep 30 14:16:34 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:34 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:34 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:34 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:34 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:34 compute-0 ceph-mon[74194]: Deploying daemon haproxy.nfs.cephfs.compute-1.oyovcd on compute-1
Sep 30 14:16:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 14:16:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:16:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v31: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Sep 30 14:16:35 compute-0 ceph-mon[74194]: pgmap v31: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Sep 30 14:16:37 compute-0 PackageKit[31967]: daemon quit
Sep 30 14:16:37 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Sep 30 14:16:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v32: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 14:16:38 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 13 completed events
Sep 30 14:16:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:16:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:16:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:16:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 14:16:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:38 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.yvkpei on compute-0
Sep 30 14:16:38 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.yvkpei on compute-0
Sep 30 14:16:38 compute-0 ceph-mon[74194]: pgmap v32: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 14:16:38 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:38 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:38 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:38 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:38 compute-0 sudo[96871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:16:38 compute-0 sudo[96871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:38 compute-0 sudo[96871]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:38 compute-0 sudo[96896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:16:38 compute-0 sudo[96896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v33: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Sep 30 14:16:39 compute-0 ceph-mon[74194]: Deploying daemon haproxy.nfs.cephfs.compute-0.yvkpei on compute-0
Sep 30 14:16:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:40 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:40 compute-0 ceph-mon[74194]: pgmap v33: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Sep 30 14:16:41 compute-0 podman[96961]: 2025-09-30 14:16:41.411726188 +0000 UTC m=+2.084531385 container create edc61d9fc650da3a0d9703fc88e0ce8fe4e117f6f99b63744bf9ebd9e9d69b45 (image=quay.io/ceph/haproxy:2.3, name=recursing_proskuriakova)
Sep 30 14:16:41 compute-0 systemd[1]: Started libpod-conmon-edc61d9fc650da3a0d9703fc88e0ce8fe4e117f6f99b63744bf9ebd9e9d69b45.scope.
Sep 30 14:16:41 compute-0 podman[96961]: 2025-09-30 14:16:41.394652298 +0000 UTC m=+2.067457465 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Sep 30 14:16:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:41 compute-0 podman[96961]: 2025-09-30 14:16:41.496498701 +0000 UTC m=+2.169303868 container init edc61d9fc650da3a0d9703fc88e0ce8fe4e117f6f99b63744bf9ebd9e9d69b45 (image=quay.io/ceph/haproxy:2.3, name=recursing_proskuriakova)
Sep 30 14:16:41 compute-0 podman[96961]: 2025-09-30 14:16:41.507751598 +0000 UTC m=+2.180556745 container start edc61d9fc650da3a0d9703fc88e0ce8fe4e117f6f99b63744bf9ebd9e9d69b45 (image=quay.io/ceph/haproxy:2.3, name=recursing_proskuriakova)
Sep 30 14:16:41 compute-0 podman[96961]: 2025-09-30 14:16:41.5108726 +0000 UTC m=+2.183677757 container attach edc61d9fc650da3a0d9703fc88e0ce8fe4e117f6f99b63744bf9ebd9e9d69b45 (image=quay.io/ceph/haproxy:2.3, name=recursing_proskuriakova)
Sep 30 14:16:41 compute-0 recursing_proskuriakova[97078]: 0 0
Sep 30 14:16:41 compute-0 systemd[1]: libpod-edc61d9fc650da3a0d9703fc88e0ce8fe4e117f6f99b63744bf9ebd9e9d69b45.scope: Deactivated successfully.
Sep 30 14:16:41 compute-0 conmon[97078]: conmon edc61d9fc650da3a0d97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-edc61d9fc650da3a0d9703fc88e0ce8fe4e117f6f99b63744bf9ebd9e9d69b45.scope/container/memory.events
Sep 30 14:16:41 compute-0 podman[96961]: 2025-09-30 14:16:41.516851938 +0000 UTC m=+2.189657085 container died edc61d9fc650da3a0d9703fc88e0ce8fe4e117f6f99b63744bf9ebd9e9d69b45 (image=quay.io/ceph/haproxy:2.3, name=recursing_proskuriakova)
Sep 30 14:16:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-40c497d3fcb8aab9894a0817fdc3c19204a59dbcd96e4069d6a11adba6611dd5-merged.mount: Deactivated successfully.
Sep 30 14:16:41 compute-0 podman[96961]: 2025-09-30 14:16:41.558986408 +0000 UTC m=+2.231791555 container remove edc61d9fc650da3a0d9703fc88e0ce8fe4e117f6f99b63744bf9ebd9e9d69b45 (image=quay.io/ceph/haproxy:2.3, name=recursing_proskuriakova)
Sep 30 14:16:41 compute-0 systemd[1]: libpod-conmon-edc61d9fc650da3a0d9703fc88e0ce8fe4e117f6f99b63744bf9ebd9e9d69b45.scope: Deactivated successfully.
Sep 30 14:16:41 compute-0 systemd[1]: Reloading.
Sep 30 14:16:41 compute-0 systemd-rc-local-generator[97125]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:16:41 compute-0 systemd-sysv-generator[97128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:16:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v34: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Sep 30 14:16:41 compute-0 systemd[1]: Reloading.
Sep 30 14:16:41 compute-0 ceph-mon[74194]: pgmap v34: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Sep 30 14:16:41 compute-0 systemd-rc-local-generator[97168]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:16:41 compute-0 systemd-sysv-generator[97171]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:16:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:42 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:42 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.yvkpei for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:16:42 compute-0 podman[97224]: 2025-09-30 14:16:42.401306611 +0000 UTC m=+0.042944763 container create ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:16:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9450b6b127d688f4d18cb676b10643c6a4d1532bcbbfb34d68176819a0b604/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:42 compute-0 podman[97224]: 2025-09-30 14:16:42.453991309 +0000 UTC m=+0.095629481 container init ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:16:42 compute-0 podman[97224]: 2025-09-30 14:16:42.458793805 +0000 UTC m=+0.100431947 container start ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:16:42 compute-0 bash[97224]: ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8
Sep 30 14:16:42 compute-0 podman[97224]: 2025-09-30 14:16:42.37889467 +0000 UTC m=+0.020532842 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Sep 30 14:16:42 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.yvkpei for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:16:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [NOTICE] 272/141642 (2) : New worker #1 (4) forked
Sep 30 14:16:42 compute-0 sudo[96896]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:16:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:16:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 14:16:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:42 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.yasbyv on compute-2
Sep 30 14:16:42 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.yasbyv on compute-2
Sep 30 14:16:43 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:43 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:43 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:43 compute-0 ceph-mon[74194]: Deploying daemon haproxy.nfs.cephfs.compute-2.yasbyv on compute-2
Sep 30 14:16:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:43 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v35: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Sep 30 14:16:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:44 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:44 compute-0 ceph-mon[74194]: pgmap v35: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Sep 30 14:16:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:45 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v36: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Sep 30 14:16:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:46 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78000fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:46 compute-0 ceph-mon[74194]: pgmap v36: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Sep 30 14:16:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:47 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf700016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:16:47
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', 'backups', '.mgr', '.nfs', 'images', 'cephfs.cephfs.data']
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v37: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1023 B/s wr, 4 op/s
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Sep 30 14:16:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Sep 30 14:16:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:16:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:16:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Sep 30 14:16:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:16:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Sep 30 14:16:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 52b7464a-460a-4fbc-8773-9b30e7c1a340 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Sep 30 14:16:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Sep 30 14:16:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:16:47 compute-0 ceph-mon[74194]: pgmap v37: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1023 B/s wr, 4 op/s
Sep 30 14:16:47 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:16:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:16:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Sep 30 14:16:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.nfjjcv on compute-0
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.nfjjcv on compute-0
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:16:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:16:48 compute-0 sudo[97253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:16:48 compute-0 sudo[97253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:48 compute-0 sudo[97253]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:48 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:48 compute-0 sudo[97278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:16:48 compute-0 sudo[97278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:16:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Sep 30 14:16:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Sep 30 14:16:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Sep 30 14:16:49 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Sep 30 14:16:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:49 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:49 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 3542de6d-1ffb-416c-9469-4899eab22abf (PG autoscaler increasing pool 6 PGs from 1 to 16)
Sep 30 14:16:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Sep 30 14:16:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:49 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:49 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:49 compute-0 ceph-mon[74194]: osdmap e49: 3 total, 3 up, 3 in
Sep 30 14:16:49 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Sep 30 14:16:49 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:49 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:49 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:49 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:16:49 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 14:16:49 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:16:49 compute-0 ceph-mon[74194]: Deploying daemon keepalived.nfs.cephfs.compute-0.nfjjcv on compute-0
Sep 30 14:16:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:49 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001ae0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v40: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Sep 30 14:16:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Sep 30 14:16:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Sep 30 14:16:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 14:16:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:50 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf700016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Sep 30 14:16:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Sep 30 14:16:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Sep 30 14:16:50 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Sep 30 14:16:50 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 73e1770f-86d9-46f5-a1c1-1181d7fbd0c7 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Sep 30 14:16:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Sep 30 14:16:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:50 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 51 pg[6.0( v 48'39 (0'0,48'39] local-lis/les=22/23 n=22 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=51 pruub=13.343109131s) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 48'38 mlcod 48'38 active pruub 190.545578003s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:16:50 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 51 pg[6.0( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=51 pruub=13.343109131s) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 48'38 mlcod 0'0 unknown pruub 190.545578003s@ mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Sep 30 14:16:50 compute-0 ceph-mon[74194]: osdmap e50: 3 total, 3 up, 3 in
Sep 30 14:16:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:50 compute-0 ceph-mon[74194]: pgmap v40: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Sep 30 14:16:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Sep 30 14:16:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Sep 30 14:16:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:50 compute-0 ceph-mon[74194]: osdmap e51: 3 total, 3 up, 3 in
Sep 30 14:16:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:51 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Sep 30 14:16:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Sep 30 14:16:51 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Sep 30 14:16:51 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 2ab85f2d-87f2-4ca9-b893-18ed71c6bee9 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Sep 30 14:16:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Sep 30 14:16:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.c( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.8( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.b( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.f( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.a( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.9( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.e( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.5( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.2( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.3( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.4( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.7( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.6( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=22/23 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.d( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.8( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.c( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.a( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.2( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.4( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.0( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 48'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 52 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=22/22 les/c/f=23/23/0 sis=51) [0] r=0 lpr=51 pi=[22,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:51 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Sep 30 14:16:51 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Sep 30 14:16:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:51 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v43: 151 pgs: 46 unknown, 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:16:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 14:16:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 14:16:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:52 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001ae0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:52 compute-0 podman[97343]: 2025-09-30 14:16:52.090693799 +0000 UTC m=+3.609634179 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Sep 30 14:16:52 compute-0 podman[97343]: 2025-09-30 14:16:52.107313487 +0000 UTC m=+3.626253837 container create c6ff119b074ef1504adad596eb3836abb6ca8ad06238cb4b327ef3936e3f5ebf (image=quay.io/ceph/keepalived:2.2.4, name=elastic_solomon, architecture=x86_64, build-date=2023-02-22T09:23:20, release=1793, io.buildah.version=1.28.2, name=keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Sep 30 14:16:52 compute-0 systemd[1]: Started libpod-conmon-c6ff119b074ef1504adad596eb3836abb6ca8ad06238cb4b327ef3936e3f5ebf.scope.
Sep 30 14:16:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:16:52 compute-0 podman[97343]: 2025-09-30 14:16:52.197192485 +0000 UTC m=+3.716132855 container init c6ff119b074ef1504adad596eb3836abb6ca8ad06238cb4b327ef3936e3f5ebf (image=quay.io/ceph/keepalived:2.2.4, name=elastic_solomon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.buildah.version=1.28.2)
Sep 30 14:16:52 compute-0 podman[97343]: 2025-09-30 14:16:52.206364766 +0000 UTC m=+3.725305116 container start c6ff119b074ef1504adad596eb3836abb6ca8ad06238cb4b327ef3936e3f5ebf (image=quay.io/ceph/keepalived:2.2.4, name=elastic_solomon, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, name=keepalived, io.openshift.tags=Ceph keepalived, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, architecture=x86_64)
Sep 30 14:16:52 compute-0 podman[97343]: 2025-09-30 14:16:52.210346331 +0000 UTC m=+3.729286681 container attach c6ff119b074ef1504adad596eb3836abb6ca8ad06238cb4b327ef3936e3f5ebf (image=quay.io/ceph/keepalived:2.2.4, name=elastic_solomon, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, name=keepalived, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, vcs-type=git)
Sep 30 14:16:52 compute-0 elastic_solomon[97439]: 0 0
Sep 30 14:16:52 compute-0 systemd[1]: libpod-c6ff119b074ef1504adad596eb3836abb6ca8ad06238cb4b327ef3936e3f5ebf.scope: Deactivated successfully.
Sep 30 14:16:52 compute-0 conmon[97439]: conmon c6ff119b074ef1504ada <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6ff119b074ef1504adad596eb3836abb6ca8ad06238cb4b327ef3936e3f5ebf.scope/container/memory.events
Sep 30 14:16:52 compute-0 podman[97343]: 2025-09-30 14:16:52.214391018 +0000 UTC m=+3.733331368 container died c6ff119b074ef1504adad596eb3836abb6ca8ad06238cb4b327ef3936e3f5ebf (image=quay.io/ceph/keepalived:2.2.4, name=elastic_solomon, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, name=keepalived, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, build-date=2023-02-22T09:23:20, io.openshift.expose-services=)
Sep 30 14:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-da818b753a04960ff7aed7467a1b2bf36bafee403b09ec1c31b212ac07c459a6-merged.mount: Deactivated successfully.
Sep 30 14:16:52 compute-0 podman[97343]: 2025-09-30 14:16:52.258056848 +0000 UTC m=+3.776997208 container remove c6ff119b074ef1504adad596eb3836abb6ca8ad06238cb4b327ef3936e3f5ebf (image=quay.io/ceph/keepalived:2.2.4, name=elastic_solomon, vendor=Red Hat, Inc., distribution-scope=public, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, version=2.2.4, vcs-type=git)
Sep 30 14:16:52 compute-0 systemd[1]: libpod-conmon-c6ff119b074ef1504adad596eb3836abb6ca8ad06238cb4b327ef3936e3f5ebf.scope: Deactivated successfully.
Sep 30 14:16:52 compute-0 systemd[1]: Reloading.
Sep 30 14:16:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Sep 30 14:16:52 compute-0 systemd-rc-local-generator[97487]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:16:52 compute-0 systemd-sysv-generator[97490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:16:52 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.f scrub starts
Sep 30 14:16:52 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.f scrub ok
Sep 30 14:16:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Sep 30 14:16:52 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Sep 30 14:16:52 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 3c538e11-4628-4979-88dc-4c7f91fac31c (PG autoscaler increasing pool 9 PGs from 1 to 32)
Sep 30 14:16:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Sep 30 14:16:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:52 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 53 pg[8.0( v 36'6 (0'0,36'6] local-lis/les=35/36 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=53 pruub=8.796410561s) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 36'5 mlcod 36'5 active pruub 188.243820190s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:16:52 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 53 pg[8.0( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=53 pruub=8.796410561s) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 36'5 mlcod 0'0 unknown pruub 188.243820190s@ mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x559a35f2b200) operator()   moving buffer(0x559a35ae3108 space 0x559a357f1940 0x0~1000 clean)
Sep 30 14:16:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x559a35f2b200) operator()   moving buffer(0x559a35ae2f28 space 0x559a358ebd50 0x0~1000 clean)
Sep 30 14:16:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x559a35f2b200) operator()   moving buffer(0x559a35ae45c8 space 0x559a357e4900 0x0~1000 clean)
Sep 30 14:16:52 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x559a35f2b200) operator()   moving buffer(0x559a35af6e88 space 0x559a358c5bb0 0x0~1000 clean)
Sep 30 14:16:52 compute-0 systemd[1]: Reloading.
Sep 30 14:16:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:52 compute-0 ceph-mon[74194]: osdmap e52: 3 total, 3 up, 3 in
Sep 30 14:16:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:52 compute-0 ceph-mon[74194]: 6.8 deep-scrub starts
Sep 30 14:16:52 compute-0 ceph-mon[74194]: 6.8 deep-scrub ok
Sep 30 14:16:52 compute-0 ceph-mon[74194]: 5.9 deep-scrub starts
Sep 30 14:16:52 compute-0 ceph-mon[74194]: 5.9 deep-scrub ok
Sep 30 14:16:52 compute-0 ceph-mon[74194]: pgmap v43: 151 pgs: 46 unknown, 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:16:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:52 compute-0 systemd-rc-local-generator[97527]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:16:52 compute-0 systemd-sysv-generator[97531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:16:52 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.nfjjcv for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: [progress WARNING root] Starting Global Recovery Event,108 pgs not in active + clean state
Sep 30 14:16:53 compute-0 podman[97584]: 2025-09-30 14:16:53.121535968 +0000 UTC m=+0.047430941 container create df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=Ceph keepalived, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=keepalived for Ceph)
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:53 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf700016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:53 compute-0 podman[97584]: 2025-09-30 14:16:53.09732055 +0000 UTC m=+0.023215543 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Sep 30 14:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aee99089c79ff226660ae9a20d2136790e00ef650075ee1bde3684c591d99bd/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:16:53 compute-0 podman[97584]: 2025-09-30 14:16:53.219264663 +0000 UTC m=+0.145159656 container init df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=keepalived for Ceph, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Sep 30 14:16:53 compute-0 podman[97584]: 2025-09-30 14:16:53.225444766 +0000 UTC m=+0.151339739 container start df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, version=2.2.4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, build-date=2023-02-22T09:23:20, name=keepalived, release=1793, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vcs-type=git, description=keepalived for Ceph)
Sep 30 14:16:53 compute-0 bash[97584]: df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096
Sep 30 14:16:53 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.nfjjcv for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:16:53 2025: Starting Keepalived v2.2.4 (08/21,2021)
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:16:53 2025: Running on Linux 5.14.0-617.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025 (built for Linux 5.14.0)
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:16:53 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:16:53 2025: Configuration file /etc/keepalived/keepalived.conf
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:16:53 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:16:53 2025: Starting VRRP child process, pid=4
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:16:53 2025: Startup complete
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:16:53 2025: (VI_0) Entering BACKUP STATE (init)
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:16:53 2025: VRRP_Script(check_backend) succeeded
Sep 30 14:16:53 compute-0 sudo[97278]: pam_unix(sudo:session): session closed for user root
Sep 30 14:16:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:16:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:16:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 14:16:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.moxzvy on compute-2
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.moxzvy on compute-2
Sep 30 14:16:53 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.b scrub starts
Sep 30 14:16:53 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.b scrub ok
Sep 30 14:16:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Sep 30 14:16:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Sep 30 14:16:53 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 493abd32-c35e-48fd-8f25-e319b524e86c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Sep 30 14:16:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Sep 30 14:16:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.17( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.5( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1( v 36'6 (0'0,36'6] local-lis/les=35/36 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.13( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.19( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.7( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.16( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.8( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.3( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.17( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.5( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.13( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1e( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.19( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.7( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.a( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.8( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.0( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 36'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.3( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.16( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 54 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=35/35 les/c/f=36/36/0 sis=53) [0] r=0 lpr=53 pi=[35,53)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:53 compute-0 ceph-mon[74194]: 6.f scrub starts
Sep 30 14:16:53 compute-0 ceph-mon[74194]: 6.f scrub ok
Sep 30 14:16:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:53 compute-0 ceph-mon[74194]: osdmap e53: 3 total, 3 up, 3 in
Sep 30 14:16:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:53 compute-0 ceph-mon[74194]: 5.8 scrub starts
Sep 30 14:16:53 compute-0 ceph-mon[74194]: 5.8 scrub ok
Sep 30 14:16:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:53 compute-0 ceph-mon[74194]: osdmap e54: 3 total, 3 up, 3 in
Sep 30 14:16:53 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:53 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v46: 213 pgs: 108 unknown, 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 14:16:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 14:16:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:54 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:54 compute-0 sshd-session[97607]: Received disconnect from 193.46.255.33 port 22442:11:  [preauth]
Sep 30 14:16:54 compute-0 sshd-session[97607]: Disconnected from authenticating user root 193.46.255.33 port 22442 [preauth]
Sep 30 14:16:54 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.c scrub starts
Sep 30 14:16:54 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.c scrub ok
Sep 30 14:16:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Sep 30 14:16:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Sep 30 14:16:54 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Sep 30 14:16:54 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev a826192d-f0f7-4718-a1dd-245f9df858d9 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Sep 30 14:16:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Sep 30 14:16:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:54 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 55 pg[9.0( v 48'1157 (0'0,48'1157] local-lis/les=37/38 n=178 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=55 pruub=8.585557938s) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 48'1156 mlcod 48'1156 active pruub 190.066223145s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:16:54 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 55 pg[9.0( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=55 pruub=8.585557938s) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 48'1156 mlcod 0'0 unknown pruub 190.066223145s@ mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a357e8f28 space 0x559a357d8420 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35af1428 space 0x559a359ca830 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35b23608 space 0x559a359ca420 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35af0f28 space 0x559a359ca690 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35b23ec8 space 0x559a35937460 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae3ba8 space 0x559a359b04f0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35816e88 space 0x559a35810c40 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae5428 space 0x559a359caf80 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35af1c48 space 0x559a359caaa0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35b239c8 space 0x559a359371f0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35af1888 space 0x559a357f1a10 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae3a68 space 0x559a358231f0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae4488 space 0x559a359cad10 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35af14c8 space 0x559a3538a350 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae2c08 space 0x559a358f44f0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35af1e28 space 0x559a359ca9d0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae3ce8 space 0x559a357dade0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae4a28 space 0x559a359cb050 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35af0488 space 0x559a3594e5c0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35b22c08 space 0x559a359ca280 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35af0028 space 0x559a359b1530 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35af0528 space 0x559a359ca4f0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35af1928 space 0x559a359ca900 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae2208 space 0x559a35944350 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae2b68 space 0x559a3597d7a0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae5888 space 0x559a359b29d0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae4668 space 0x559a359cac40 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35b23108 space 0x559a359ca350 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae4fc8 space 0x559a359caeb0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35addc48 space 0x559a359ca1b0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x559a35df2b40) operator()   moving buffer(0x559a35ae4ac8 space 0x559a359cade0 0x0~1000 clean)
Sep 30 14:16:54 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:16:54 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:16:54 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 14:16:54 compute-0 ceph-mon[74194]: Deploying daemon keepalived.nfs.cephfs.compute-2.moxzvy on compute-2
Sep 30 14:16:54 compute-0 ceph-mon[74194]: 6.b scrub starts
Sep 30 14:16:54 compute-0 ceph-mon[74194]: 6.b scrub ok
Sep 30 14:16:54 compute-0 ceph-mon[74194]: 5.18 deep-scrub starts
Sep 30 14:16:54 compute-0 ceph-mon[74194]: 5.18 deep-scrub ok
Sep 30 14:16:54 compute-0 ceph-mon[74194]: pgmap v46: 213 pgs: 108 unknown, 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:54 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:54 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:54 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:54 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:54 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:54 compute-0 ceph-mon[74194]: osdmap e55: 3 total, 3 up, 3 in
Sep 30 14:16:54 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Sep 30 14:16:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:55 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001ae0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:16:55 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.e scrub starts
Sep 30 14:16:55 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.e scrub ok
Sep 30 14:16:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Sep 30 14:16:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Sep 30 14:16:55 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1e( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.19( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.17( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.10( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.11( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 87a5b2af-1e03-42d8-a029-ca16a697f76a (PG autoscaler increasing pool 12 PGs from 1 to 32)
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.3( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.4( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.7( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.13( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.12( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1d( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1c( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1f( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.18( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1b( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1a( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 52b7464a-460a-4fbc-8773-9b30e7c1a340 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 52b7464a-460a-4fbc-8773-9b30e7c1a340 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 8 seconds
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 3542de6d-1ffb-416c-9469-4899eab22abf (PG autoscaler increasing pool 6 PGs from 1 to 16)
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 3542de6d-1ffb-416c-9469-4899eab22abf (PG autoscaler increasing pool 6 PGs from 1 to 16) in 6 seconds
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 73e1770f-86d9-46f5-a1c1-1181d7fbd0c7 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 73e1770f-86d9-46f5-a1c1-1181d7fbd0c7 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 2ab85f2d-87f2-4ca9-b893-18ed71c6bee9 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 2ab85f2d-87f2-4ca9-b893-18ed71c6bee9 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 3c538e11-4628-4979-88dc-4c7f91fac31c (PG autoscaler increasing pool 9 PGs from 1 to 32)
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 3c538e11-4628-4979-88dc-4c7f91fac31c (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 493abd32-c35e-48fd-8f25-e319b524e86c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 493abd32-c35e-48fd-8f25-e319b524e86c (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev a826192d-f0f7-4718-a1dd-245f9df858d9 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event a826192d-f0f7-4718-a1dd-245f9df858d9 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 87a5b2af-1e03-42d8-a029-ca16a697f76a (PG autoscaler increasing pool 12 PGs from 1 to 32)
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 87a5b2af-1e03-42d8-a029-ca16a697f76a (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.16( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.5( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.6( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.a( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.d( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.c( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.b( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.8( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.9( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.e( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.f( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.2( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.14( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.15( v 48'1157 lc 0'0 (0'0,48'1157] local-lis/les=37/38 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.11( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.10( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.17( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.3( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.7( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.0( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 48'1156 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.4( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.13( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1d( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1c( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.18( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.16( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.6( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.1( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.a( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.d( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.5( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.c( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.b( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.8( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.e( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.9( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.f( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.2( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.14( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.15( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 56 pg[9.12( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=37/37 les/c/f=38/38/0 sis=55) [0] r=0 lpr=55 pi=[37,55)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:16:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:55 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v49: 275 pgs: 62 unknown, 32 peering, 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 14:16:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Sep 30 14:16:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:56 compute-0 ceph-mon[74194]: 6.c scrub starts
Sep 30 14:16:56 compute-0 ceph-mon[74194]: 6.c scrub ok
Sep 30 14:16:56 compute-0 ceph-mon[74194]: 5.19 scrub starts
Sep 30 14:16:56 compute-0 ceph-mon[74194]: 5.19 scrub ok
Sep 30 14:16:56 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Sep 30 14:16:56 compute-0 ceph-mon[74194]: osdmap e56: 3 total, 3 up, 3 in
Sep 30 14:16:56 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:56 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Sep 30 14:16:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:56 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c0095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:56 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Sep 30 14:16:56 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Sep 30 14:16:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Sep 30 14:16:56 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:56 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Sep 30 14:16:56 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Sep 30 14:16:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:16:56 2025: (VI_0) Entering MASTER STATE
Sep 30 14:16:57 compute-0 ceph-mon[74194]: 7.1d scrub starts
Sep 30 14:16:57 compute-0 ceph-mon[74194]: 7.1d scrub ok
Sep 30 14:16:57 compute-0 ceph-mon[74194]: 6.e scrub starts
Sep 30 14:16:57 compute-0 ceph-mon[74194]: 6.e scrub ok
Sep 30 14:16:57 compute-0 ceph-mon[74194]: 5.1e scrub starts
Sep 30 14:16:57 compute-0 ceph-mon[74194]: 5.1e scrub ok
Sep 30 14:16:57 compute-0 ceph-mon[74194]: pgmap v49: 275 pgs: 62 unknown, 32 peering, 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:57 compute-0 ceph-mon[74194]: 7.1a scrub starts
Sep 30 14:16:57 compute-0 ceph-mon[74194]: 7.1a scrub ok
Sep 30 14:16:57 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:57 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Sep 30 14:16:57 compute-0 ceph-mon[74194]: osdmap e57: 3 total, 3 up, 3 in
Sep 30 14:16:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:57 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:57 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Sep 30 14:16:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Sep 30 14:16:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:57 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78002f70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v51: 337 pgs: 124 unknown, 32 peering, 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:16:57 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 57 pg[11.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=57 pruub=10.577435493s) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active pruub 195.295410156s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:16:57 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 57 pg[11.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=57 pruub=10.577435493s) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown pruub 195.295410156s@ mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:57 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Sep 30 14:16:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:58 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Sep 30 14:16:58 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 21 completed events
Sep 30 14:16:58 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Sep 30 14:16:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:16:58 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.a scrub starts
Sep 30 14:16:58 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.a scrub ok
Sep 30 14:16:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:16:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:59 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c0095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.16( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.d( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.c( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.b( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.a( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.9( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.8( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.3( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.7( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.18( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.1a( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.1d( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.1e( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.1f( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.10( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.11( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.5( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.6( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.12( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.13( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.15( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.1b( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.2( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 58 pg[11.1c( empty local-lis/les=41/42 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:16:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Sep 30 14:16:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:16:59 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Sep 30 14:16:59 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Sep 30 14:16:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:16:59 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:16:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v53: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Sep 30 14:17:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:00 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78002f70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:00 compute-0 ceph-mon[74194]: 6.9 scrub starts
Sep 30 14:17:00 compute-0 ceph-mon[74194]: 6.9 scrub ok
Sep 30 14:17:00 compute-0 ceph-mon[74194]: 5.1f scrub starts
Sep 30 14:17:00 compute-0 ceph-mon[74194]: 5.1f scrub ok
Sep 30 14:17:00 compute-0 ceph-mon[74194]: 7.1e scrub starts
Sep 30 14:17:00 compute-0 ceph-mon[74194]: 7.1e scrub ok
Sep 30 14:17:00 compute-0 ceph-mon[74194]: pgmap v51: 337 pgs: 124 unknown, 32 peering, 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:17:00 compute-0 ceph-mon[74194]: osdmap e58: 3 total, 3 up, 3 in
Sep 30 14:17:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Sep 30 14:17:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Sep 30 14:17:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:17:00 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.17( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.0( empty local-lis/les=57/59 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Sep 30 14:17:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.16( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.d( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.c( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.b( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.a( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.9( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.e( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.f( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.3( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.8( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.4( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.7( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.18( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.19( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.1a( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.1d( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.1e( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.1f( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.10( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.11( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.2( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.5( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.6( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.1( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.15( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.12( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.13( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.1b( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.14( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 59 pg[11.1c( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=41/41 les/c/f=42/42/0 sis=57) [0] r=0 lpr=57 pi=[41,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:00 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 14:17:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 14:17:00 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:17:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:17:00 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:17:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:17:00 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.ixytoe on compute-1
Sep 30 14:17:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.ixytoe on compute-1
Sep 30 14:17:00 compute-0 sudo[97637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbowujvvfuimggonrhewhkuiptdflmwa ; /usr/bin/python3'
Sep 30 14:17:00 compute-0 sudo[97637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:17:00 compute-0 python3[97639]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:17:01 compute-0 podman[97640]: 2025-09-30 14:17:00.957783589 +0000 UTC m=+0.023134841 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:17:01 compute-0 podman[97640]: 2025-09-30 14:17:01.196038626 +0000 UTC m=+0.261389858 container create 50a0fdfa458c3d53052fed60c9c75661d86167afd2f54b039c714a42f1a36b1f (image=quay.io/ceph/ceph:v19, name=hardcore_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:17:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:01 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:01 compute-0 systemd[1]: Started libpod-conmon-50a0fdfa458c3d53052fed60c9c75661d86167afd2f54b039c714a42f1a36b1f.scope.
Sep 30 14:17:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a742b366faf9808ebf978e1befff6b160506a014c83cc60f3ec76f487342476e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a742b366faf9808ebf978e1befff6b160506a014c83cc60f3ec76f487342476e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:01 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Sep 30 14:17:01 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Sep 30 14:17:01 compute-0 podman[97640]: 2025-09-30 14:17:01.689704304 +0000 UTC m=+0.755055536 container init 50a0fdfa458c3d53052fed60c9c75661d86167afd2f54b039c714a42f1a36b1f (image=quay.io/ceph/ceph:v19, name=hardcore_lumiere, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 14:17:01 compute-0 podman[97640]: 2025-09-30 14:17:01.696331318 +0000 UTC m=+0.761682550 container start 50a0fdfa458c3d53052fed60c9c75661d86167afd2f54b039c714a42f1a36b1f (image=quay.io/ceph/ceph:v19, name=hardcore_lumiere, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 14:17:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:01 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v55: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:17:01 compute-0 podman[97640]: 2025-09-30 14:17:01.899626384 +0000 UTC m=+0.964977616 container attach 50a0fdfa458c3d53052fed60c9c75661d86167afd2f54b039c714a42f1a36b1f (image=quay.io/ceph/ceph:v19, name=hardcore_lumiere, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:17:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:02 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:02 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Sep 30 14:17:02 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Sep 30 14:17:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:17:02 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 6.5 scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 5.13 scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 5.13 scrub ok
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 6.5 scrub ok
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 7.4 deep-scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 7.4 deep-scrub ok
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 6.a scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 6.a scrub ok
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 5.3 scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 5.3 scrub ok
Sep 30 14:17:03 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 7.c scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 7.c scrub ok
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 6.3 scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 6.3 scrub ok
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 5.1d scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 5.1d scrub ok
Sep 30 14:17:03 compute-0 ceph-mon[74194]: pgmap v53: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 593 B/s rd, 0 op/s
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 7.16 scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 7.16 scrub ok
Sep 30 14:17:03 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:03 compute-0 ceph-mon[74194]: osdmap e59: 3 total, 3 up, 3 in
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 6.2 scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 6.2 scrub ok
Sep 30 14:17:03 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:03 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:17:03 compute-0 ceph-mon[74194]: Deploying daemon keepalived.nfs.cephfs.compute-1.ixytoe on compute-1
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 5.1c scrub starts
Sep 30 14:17:03 compute-0 ceph-mon[74194]: 5.1c scrub ok
Sep 30 14:17:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:03 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78002f70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:03 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.0 deep-scrub starts
Sep 30 14:17:03 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.0 deep-scrub ok
Sep 30 14:17:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:03 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v56: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 436 B/s rd, 0 op/s
Sep 30 14:17:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:04 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:04 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Sep 30 14:17:04 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Sep 30 14:17:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:05 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 7.1c deep-scrub starts
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 7.1c deep-scrub ok
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 6.4 scrub starts
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 6.4 scrub ok
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 5.12 scrub starts
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 5.12 scrub ok
Sep 30 14:17:05 compute-0 ceph-mon[74194]: pgmap v55: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 7.1f scrub starts
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 6.6 scrub starts
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 6.6 scrub ok
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 5.e deep-scrub starts
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 5.e deep-scrub ok
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 7.12 scrub starts
Sep 30 14:17:05 compute-0 ceph-mon[74194]: 7.1f scrub ok
Sep 30 14:17:05 compute-0 ceph-mon[74194]: pgmap v56: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 436 B/s rd, 0 op/s
Sep 30 14:17:05 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.1 deep-scrub starts
Sep 30 14:17:05 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.1 deep-scrub ok
Sep 30 14:17:05 compute-0 hardcore_lumiere[97657]: could not fetch user info: no user info saved
Sep 30 14:17:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:05 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v57: 337 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 335 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Sep 30 14:17:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 14:17:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 14:17:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 14:17:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Sep 30 14:17:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Sep 30 14:17:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 14:17:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Sep 30 14:17:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Sep 30 14:17:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 14:17:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 14:17:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:06 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Sep 30 14:17:06 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.d deep-scrub starts
Sep 30 14:17:06 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 6.d deep-scrub ok
Sep 30 14:17:06 compute-0 systemd[1]: libpod-50a0fdfa458c3d53052fed60c9c75661d86167afd2f54b039c714a42f1a36b1f.scope: Deactivated successfully.
Sep 30 14:17:06 compute-0 podman[97640]: 2025-09-30 14:17:06.719607024 +0000 UTC m=+5.784958266 container died 50a0fdfa458c3d53052fed60c9c75661d86167afd2f54b039c714a42f1a36b1f (image=quay.io/ceph/ceph:v19, name=hardcore_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:17:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Sep 30 14:17:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Sep 30 14:17:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 6.0 deep-scrub starts
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 6.0 deep-scrub ok
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 5.6 scrub starts
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 5.6 scrub ok
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 6.7 scrub starts
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 6.7 scrub ok
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 5.15 scrub starts
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 5.15 scrub ok
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 7.12 scrub ok
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 7.15 deep-scrub starts
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 7.15 deep-scrub ok
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 6.1 deep-scrub starts
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 6.1 deep-scrub ok
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 5.b scrub starts
Sep 30 14:17:07 compute-0 ceph-mon[74194]: 5.b scrub ok
Sep 30 14:17:07 compute-0 ceph-mon[74194]: pgmap v57: 337 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 335 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Sep 30 14:17:07 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:07 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:07 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:07 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Sep 30 14:17:07 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:07 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Sep 30 14:17:07 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:07 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[5.1d( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[5.c( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[5.1e( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[5.14( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[5.17( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a742b366faf9808ebf978e1befff6b160506a014c83cc60f3ec76f487342476e-merged.mount: Deactivated successfully.
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[5.a( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[5.6( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[5.5( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[5.3( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[5.19( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.17( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.465577126s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.384689331s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.17( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.465552330s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.384689331s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.552213669s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471466064s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.552199364s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471466064s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.551512718s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471466064s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.292827606s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 202.212829590s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.551479340s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471466064s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.292809486s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.212829590s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.3( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.551178932s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471466064s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.3( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.551121712s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471466064s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.292254448s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 202.212692261s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550978661s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471420288s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.292233467s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.212692261s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550949097s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471420288s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.8( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550733566s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471374512s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.8( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550720215s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471374512s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550628662s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471389771s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550610542s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471389771s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.291865349s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 202.212661743s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.a( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.502193451s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423034668s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.291812897s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.212661743s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.a( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.502174377s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423034668s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.a( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550317764s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471298218s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.a( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550299644s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471298218s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.291488647s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 202.212570190s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.291457176s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.212570190s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550190926s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471267700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550116539s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471267700s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.e( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.502015114s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423202515s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.e( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501993179s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423202515s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.550007820s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471282959s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.f( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501939774s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423217773s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.549976349s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471282959s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.f( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501918793s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423217773s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.291156769s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 202.212539673s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.549851418s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471267700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.8( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501805305s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423263550s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.549831390s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471267700s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.8( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501792908s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423263550s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.290976524s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.212539673s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.3( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501621246s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423233032s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.290764809s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 202.212432861s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.3( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501601219s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423233032s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.4( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501510620s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423263550s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.290741920s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.212432861s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.7( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501081467s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423278809s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.7( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501065254s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423278809s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.548951149s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471237183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.548933029s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471237183s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.19( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.500913620s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423324585s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.19( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.500901222s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423324585s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.4( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.501471519s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423263550s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1a( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.500809669s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423339844s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1a( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.500718117s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423339844s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.19( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.548574448s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471237183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.19( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.548559189s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471237183s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.549344063s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471252441s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1d( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.500517845s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423339844s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1d( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.500500679s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423339844s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.548357964s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471252441s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1e( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.500361443s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423355103s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.16( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499586105s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.422561646s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1e( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.500347137s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423355103s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.16( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499518394s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.422561646s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547923088s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471160889s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547878265s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471160889s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547688484s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471008301s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547664642s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471008301s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=51/52 n=3 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.283540726s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 202.206909180s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=51/52 n=3 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.283521652s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.206909180s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547410965s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.470916748s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.5( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547381401s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.470901489s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.5( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.500125885s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423645020s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.5( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.500105858s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423645020s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.5( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547367096s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.470901489s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547374725s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.470916748s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.288737297s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 202.212310791s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.288710594s) [2] r=-1 lpr=60 pi=[51,60)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.212310791s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.13( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499952316s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423690796s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.13( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499937057s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423690796s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547109604s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.470870972s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499851227s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423675537s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.12( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499831200s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423690796s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=53/54 n=1 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547044754s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.470870972s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499831200s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423675537s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.12( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499808311s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423690796s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547019958s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.470932007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547003746s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.470932007s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.16( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547415733s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471466064s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.16( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.547401428s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471466064s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.17( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.541036606s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.465133667s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.14( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499674797s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423736572s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.17( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.541023254s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.465133667s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.14( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499640465s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423736572s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1b( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499504089s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423706055s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.546630859s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.470840454s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1b( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499491692s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423706055s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.546618462s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.470840454s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1c( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499506950s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 active pruub 203.423767090s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[11.1c( empty local-lis/les=57/59 n=0 ec=57/41 lis/c=57/57 les/c/f=59/59/0 sis=60 pruub=9.499487877s) [1] r=-1 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 203.423767090s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.546561241s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.470901489s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.546537399s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.470901489s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.546778679s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 active pruub 204.471252441s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=53/54 n=0 ec=53/35 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=10.546759605s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.471252441s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[12.10( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.18( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[10.15( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.1b( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[10.14( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[12.12( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.e( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.f( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[10.2( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.2( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[12.b( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[12.c( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[12.6( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[10.8( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[12.e( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.6( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.9( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[12.a( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.8( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.b( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[10.5( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[10.19( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[12.1c( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.10( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.13( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[12.19( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[12.8( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.4( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.3( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[10.18( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[10.1b( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[7.1e( empty local-lis/les=0/0 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 60 pg[10.13( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:07 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78004070 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:07 compute-0 podman[97640]: 2025-09-30 14:17:07.504213755 +0000 UTC m=+6.569564987 container remove 50a0fdfa458c3d53052fed60c9c75661d86167afd2f54b039c714a42f1a36b1f (image=quay.io/ceph/ceph:v19, name=hardcore_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:17:07 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Sep 30 14:17:07 compute-0 sudo[97637]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:07 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Sep 30 14:17:07 compute-0 systemd[1]: libpod-conmon-50a0fdfa458c3d53052fed60c9c75661d86167afd2f54b039c714a42f1a36b1f.scope: Deactivated successfully.
Sep 30 14:17:07 compute-0 sudo[97781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyhmbpvajfnytoynxwozvqwqisncluen ; /usr/bin/python3'
Sep 30 14:17:07 compute-0 sudo[97781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:17:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v59: 337 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 335 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Sep 30 14:17:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:07 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78004070 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Sep 30 14:17:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Sep 30 14:17:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Sep 30 14:17:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Sep 30 14:17:07 compute-0 python3[97783]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:17:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Sep 30 14:17:07 compute-0 podman[97784]: 2025-09-30 14:17:07.919932818 +0000 UTC m=+0.073302713 container create 82557bef686977cecd00782f3174b66fa0886c091836871ed9cdc3a56e85907f (image=quay.io/ceph/ceph:v19, name=ecstatic_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:17:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Sep 30 14:17:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Sep 30 14:17:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Sep 30 14:17:07 compute-0 podman[97784]: 2025-09-30 14:17:07.872313593 +0000 UTC m=+0.025683518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:17:07 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Sep 30 14:17:07 compute-0 systemd[1]: Started libpod-conmon-82557bef686977cecd00782f3174b66fa0886c091836871ed9cdc3a56e85907f.scope.
Sep 30 14:17:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045dcb044c3ddcccea28307a438ad1ae9c5fdd294319475030a8cc7b69b7ddea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045dcb044c3ddcccea28307a438ad1ae9c5fdd294319475030a8cc7b69b7ddea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=15.350039482s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 210.212875366s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=15.350008011s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.212875366s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[6.2( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=15.349562645s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 210.212753296s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[6.2( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=15.349544525s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.212753296s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=15.349314690s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 210.212646484s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=15.349302292s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.212646484s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[6.a( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=15.349094391s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 210.212646484s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[6.a( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=15.349068642s) [1] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.212646484s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:08 compute-0 ceph-mon[74194]: 7.3 scrub starts
Sep 30 14:17:08 compute-0 ceph-mon[74194]: 7.3 scrub ok
Sep 30 14:17:08 compute-0 ceph-mon[74194]: 6.d deep-scrub starts
Sep 30 14:17:08 compute-0 ceph-mon[74194]: 6.d deep-scrub ok
Sep 30 14:17:08 compute-0 ceph-mon[74194]: 5.14 deep-scrub starts
Sep 30 14:17:08 compute-0 ceph-mon[74194]: 5.14 deep-scrub ok
Sep 30 14:17:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Sep 30 14:17:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Sep 30 14:17:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:17:08 compute-0 ceph-mon[74194]: osdmap e60: 3 total, 3 up, 3 in
Sep 30 14:17:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Sep 30 14:17:08 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Sep 30 14:17:08 compute-0 podman[97784]: 2025-09-30 14:17:08.04980324 +0000 UTC m=+0.203173155 container init 82557bef686977cecd00782f3174b66fa0886c091836871ed9cdc3a56e85907f (image=quay.io/ceph/ceph:v19, name=ecstatic_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[5.1d( empty local-lis/les=60/61 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 podman[97784]: 2025-09-30 14:17:08.055771357 +0000 UTC m=+0.209141252 container start 82557bef686977cecd00782f3174b66fa0886c091836871ed9cdc3a56e85907f (image=quay.io/ceph/ceph:v19, name=ecstatic_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 14:17:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:08 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:08 compute-0 podman[97784]: 2025-09-30 14:17:08.081104354 +0000 UTC m=+0.234474249 container attach 82557bef686977cecd00782f3174b66fa0886c091836871ed9cdc3a56e85907f (image=quay.io/ceph/ceph:v19, name=ecstatic_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[5.c( empty local-lis/les=60/61 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[5.1e( empty local-lis/les=60/61 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[5.14( empty local-lis/les=60/61 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[5.a( empty local-lis/les=60/61 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[5.6( empty local-lis/les=60/61 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[5.17( empty local-lis/les=60/61 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[5.3( empty local-lis/les=60/61 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[5.5( empty local-lis/les=60/61 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.1b( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[5.19( empty local-lis/les=60/61 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[12.a( v 48'45 (0'0,48'45] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[12.b( v 48'45 (0'0,48'45] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[12.c( v 48'45 (0'0,48'45] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.6( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[12.e( v 48'45 (0'0,48'45] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[12.8( v 48'45 (0'0,48'45] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.2( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[10.8( v 48'48 (0'0,48'48] local-lis/les=60/61 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.3( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.4( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.f( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.8( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[10.5( v 48'48 (0'0,48'48] local-lis/les=60/61 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.b( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[10.2( v 48'48 (0'0,48'48] local-lis/les=60/61 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[12.10( v 58'48 lc 48'14 (0'0,58'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=58'48 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[10.18( v 48'48 (0'0,48'48] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[10.1b( v 48'48 (0'0,48'48] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[12.19( v 48'45 (0'0,48'45] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.13( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.9( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.e( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[12.6( v 48'45 (0'0,48'45] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[10.13( v 48'48 (0'0,48'48] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[10.19( v 48'48 (0'0,48'48] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[12.12( v 48'45 (0'0,48'45] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.1e( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[10.14( v 59'54 lc 59'53 (0'0,59'54] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=59'54 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[10.15( v 59'54 lc 59'53 (0'0,59'54] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=59'54 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[12.1c( v 48'45 (0'0,48'45] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.18( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 61 pg[7.10( empty local-lis/les=60/61 n=0 ec=53/23 lis/c=53/53 les/c/f=55/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:08 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Sep 30 14:17:08 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Sep 30 14:17:08 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 5e70e6a2-8f12-4a22-9fce-a80b463e809b (Global Recovery Event) in 16 seconds
Sep 30 14:17:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Sep 30 14:17:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Sep 30 14:17:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Sep 30 14:17:09 compute-0 ceph-mon[74194]: 9.15 deep-scrub starts
Sep 30 14:17:09 compute-0 ceph-mon[74194]: 9.15 deep-scrub ok
Sep 30 14:17:09 compute-0 ceph-mon[74194]: 5.0 scrub starts
Sep 30 14:17:09 compute-0 ceph-mon[74194]: 5.0 scrub ok
Sep 30 14:17:09 compute-0 ceph-mon[74194]: pgmap v59: 337 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 335 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Sep 30 14:17:09 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Sep 30 14:17:09 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Sep 30 14:17:09 compute-0 ceph-mon[74194]: osdmap e61: 3 total, 3 up, 3 in
Sep 30 14:17:09 compute-0 ceph-mon[74194]: 10.16 scrub starts
Sep 30 14:17:09 compute-0 ceph-mon[74194]: 10.16 scrub ok
Sep 30 14:17:09 compute-0 ceph-mon[74194]: osdmap e62: 3 total, 3 up, 3 in
Sep 30 14:17:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:09 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]: {
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "user_id": "openstack",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "display_name": "openstack",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "email": "",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "suspended": 0,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "max_buckets": 1000,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "subusers": [],
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "keys": [
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         {
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:             "user": "openstack",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:             "access_key": "SYZMHL53S5RG8JANIGKZ",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:             "secret_key": "p4oim9bNBWXWXhrvW48W7s1B8rBIJvnY5RYEWUer",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:             "active": true,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:             "create_date": "2025-09-30T14:17:09.156373Z"
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         }
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     ],
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "swift_keys": [],
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "caps": [],
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "op_mask": "read, write, delete",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "default_placement": "",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "default_storage_class": "",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "placement_tags": [],
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "bucket_quota": {
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         "enabled": false,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         "check_on_raw": false,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         "max_size": -1,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         "max_size_kb": 0,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         "max_objects": -1
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     },
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "user_quota": {
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         "enabled": false,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         "check_on_raw": false,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         "max_size": -1,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         "max_size_kb": 0,
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:         "max_objects": -1
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     },
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "temp_url_keys": [],
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "type": "rgw",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "mfa_ids": [],
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "account_id": "",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "path": "/",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "create_date": "2025-09-30T14:17:09.155419Z",
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "tags": [],
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]:     "group_ids": []
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]: }
Sep 30 14:17:09 compute-0 ecstatic_wu[97799]: 
Sep 30 14:17:09 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.e scrub starts
Sep 30 14:17:09 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.e scrub ok
Sep 30 14:17:09 compute-0 systemd[1]: libpod-82557bef686977cecd00782f3174b66fa0886c091836871ed9cdc3a56e85907f.scope: Deactivated successfully.
Sep 30 14:17:09 compute-0 podman[97784]: 2025-09-30 14:17:09.602238424 +0000 UTC m=+1.755608329 container died 82557bef686977cecd00782f3174b66fa0886c091836871ed9cdc3a56e85907f (image=quay.io/ceph/ceph:v19, name=ecstatic_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:17:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-045dcb044c3ddcccea28307a438ad1ae9c5fdd294319475030a8cc7b69b7ddea-merged.mount: Deactivated successfully.
Sep 30 14:17:09 compute-0 podman[97784]: 2025-09-30 14:17:09.692577495 +0000 UTC m=+1.845947390 container remove 82557bef686977cecd00782f3174b66fa0886c091836871ed9cdc3a56e85907f (image=quay.io/ceph/ceph:v19, name=ecstatic_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 14:17:09 compute-0 sudo[97781]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:09 compute-0 systemd[1]: libpod-conmon-82557bef686977cecd00782f3174b66fa0886c091836871ed9cdc3a56e85907f.scope: Deactivated successfully.
Sep 30 14:17:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v62: 337 pgs: 4 active+recovery_wait, 4 active+recovery_wait+degraded, 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 active+recovering, 326 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4/220 objects degraded (1.818%); 3/220 objects misplaced (1.364%); 122 B/s, 2 keys/s, 2 objects/s recovering
Sep 30 14:17:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:09 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:10 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:10 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 4/220 objects degraded (1.818%), 4 pgs degraded (PG_DEGRADED)
Sep 30 14:17:10 compute-0 ceph-mon[74194]: 11.0 scrub starts
Sep 30 14:17:10 compute-0 ceph-mon[74194]: 11.0 scrub ok
Sep 30 14:17:10 compute-0 ceph-mon[74194]: 7.19 scrub starts
Sep 30 14:17:10 compute-0 ceph-mon[74194]: 7.19 scrub ok
Sep 30 14:17:10 compute-0 ceph-mon[74194]: 12.1a scrub starts
Sep 30 14:17:10 compute-0 ceph-mon[74194]: 12.1a scrub ok
Sep 30 14:17:10 compute-0 ceph-mon[74194]: pgmap v62: 337 pgs: 4 active+recovery_wait, 4 active+recovery_wait+degraded, 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 active+recovering, 326 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4/220 objects degraded (1.818%); 3/220 objects misplaced (1.364%); 122 B/s, 2 keys/s, 2 objects/s recovering
Sep 30 14:17:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:17:10 compute-0 python3[97920]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:17:10 compute-0 ceph-mgr[74485]: [dashboard INFO request] [192.168.122.100:52002] [GET] [200] [0.117s] [6.3K] [6d2bb495-f9d6-40bc-b466-e307c7713fc1] /
Sep 30 14:17:10 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.d scrub starts
Sep 30 14:17:10 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.d scrub ok
Sep 30 14:17:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:17:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:17:10 compute-0 python3[97944]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:17:10 compute-0 ceph-mgr[74485]: [dashboard INFO request] [192.168.122.100:56400] [GET] [200] [0.003s] [6.3K] [e842c108-e809-4b83-8eaf-5776b79ca93d] /
Sep 30 14:17:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 14:17:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:11 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 2eae1830-bde6-4594-8917-8eee46c8750a (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Sep 30 14:17:11 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 2eae1830-bde6-4594-8917-8eee46c8750a (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 37 seconds
Sep 30 14:17:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Sep 30 14:17:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:11 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 6d07f7aa-a9a9-4472-a869-4b025251677d (Updating alertmanager deployment (+1 -> 1))
Sep 30 14:17:11 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Sep 30 14:17:11 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Sep 30 14:17:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:11 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:11 compute-0 sudo[97945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:17:11 compute-0 sudo[97945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:17:11 compute-0 sudo[97945]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:11 compute-0 ceph-mon[74194]: 8.e scrub starts
Sep 30 14:17:11 compute-0 ceph-mon[74194]: 8.e scrub ok
Sep 30 14:17:11 compute-0 ceph-mon[74194]: 12.5 scrub starts
Sep 30 14:17:11 compute-0 ceph-mon[74194]: 12.5 scrub ok
Sep 30 14:17:11 compute-0 ceph-mon[74194]: Health check failed: Degraded data redundancy: 4/220 objects degraded (1.818%), 4 pgs degraded (PG_DEGRADED)
Sep 30 14:17:11 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:11 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:11 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:11 compute-0 sudo[97972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:17:11 compute-0 sudo[97972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:17:11 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.c scrub starts
Sep 30 14:17:11 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.c scrub ok
Sep 30 14:17:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v63: 337 pgs: 4 active+recovery_wait, 4 active+recovery_wait+degraded, 1 active+recovering, 328 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4/219 objects degraded (1.826%); 3/219 objects misplaced (1.370%); 130 B/s, 2 keys/s, 2 objects/s recovering
Sep 30 14:17:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:11 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:12 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:12 compute-0 ceph-mon[74194]: 11.d scrub starts
Sep 30 14:17:12 compute-0 ceph-mon[74194]: 11.d scrub ok
Sep 30 14:17:12 compute-0 ceph-mon[74194]: 12.7 scrub starts
Sep 30 14:17:12 compute-0 ceph-mon[74194]: 12.7 scrub ok
Sep 30 14:17:12 compute-0 ceph-mon[74194]: 10.0 scrub starts
Sep 30 14:17:12 compute-0 ceph-mon[74194]: 10.0 scrub ok
Sep 30 14:17:12 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:12 compute-0 ceph-mon[74194]: Deploying daemon alertmanager.compute-0 on compute-0
Sep 30 14:17:12 compute-0 ceph-mon[74194]: 11.c scrub starts
Sep 30 14:17:12 compute-0 ceph-mon[74194]: 11.c scrub ok
Sep 30 14:17:12 compute-0 ceph-mon[74194]: 8.6 scrub starts
Sep 30 14:17:12 compute-0 ceph-mon[74194]: 8.6 scrub ok
Sep 30 14:17:12 compute-0 ceph-mon[74194]: pgmap v63: 337 pgs: 4 active+recovery_wait, 4 active+recovery_wait+degraded, 1 active+recovering, 328 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4/219 objects degraded (1.826%); 3/219 objects misplaced (1.370%); 130 B/s, 2 keys/s, 2 objects/s recovering
Sep 30 14:17:12 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.b scrub starts
Sep 30 14:17:12 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.b scrub ok
Sep 30 14:17:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:13 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:13 compute-0 ceph-mon[74194]: 7.d scrub starts
Sep 30 14:17:13 compute-0 ceph-mon[74194]: 7.d scrub ok
Sep 30 14:17:13 compute-0 ceph-mon[74194]: 11.b scrub starts
Sep 30 14:17:13 compute-0 ceph-mon[74194]: 11.b scrub ok
Sep 30 14:17:13 compute-0 ceph-mon[74194]: 10.3 scrub starts
Sep 30 14:17:13 compute-0 ceph-mon[74194]: 10.3 scrub ok
Sep 30 14:17:13 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Sep 30 14:17:13 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Sep 30 14:17:13 compute-0 podman[98037]: 2025-09-30 14:17:13.787287002 +0000 UTC m=+2.141766983 volume create 5af37099633de84afa261a3fbac0d666b4b36ad6eaf87103410bf5b74f3284ae
Sep 30 14:17:13 compute-0 podman[98037]: 2025-09-30 14:17:13.796567197 +0000 UTC m=+2.151047178 container create dd55233323e41b0b65640e6eeb648c6ceec29945c0aee99458a02386b6640579 (image=quay.io/prometheus/alertmanager:v0.25.0, name=suspicious_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v64: 337 pgs: 4 active+recovery_wait, 4 active+recovery_wait+degraded, 1 active+recovering, 328 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4/219 objects degraded (1.826%); 3/219 objects misplaced (1.370%); 116 B/s, 1 keys/s, 2 objects/s recovering
Sep 30 14:17:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:13 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:13 compute-0 systemd[1]: Started libpod-conmon-dd55233323e41b0b65640e6eeb648c6ceec29945c0aee99458a02386b6640579.scope.
Sep 30 14:17:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:17:13 compute-0 podman[98037]: 2025-09-30 14:17:13.772157834 +0000 UTC m=+2.126637845 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 14:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc1f8f3c83d8d31200a8f7e2e8a0004516f5f321bccba15664711f2313b8725/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:13 compute-0 podman[98037]: 2025-09-30 14:17:13.881367821 +0000 UTC m=+2.235847822 container init dd55233323e41b0b65640e6eeb648c6ceec29945c0aee99458a02386b6640579 (image=quay.io/prometheus/alertmanager:v0.25.0, name=suspicious_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:13 compute-0 podman[98037]: 2025-09-30 14:17:13.887300678 +0000 UTC m=+2.241780659 container start dd55233323e41b0b65640e6eeb648c6ceec29945c0aee99458a02386b6640579 (image=quay.io/prometheus/alertmanager:v0.25.0, name=suspicious_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:13 compute-0 suspicious_dubinsky[98175]: 65534 65534
Sep 30 14:17:13 compute-0 systemd[1]: libpod-dd55233323e41b0b65640e6eeb648c6ceec29945c0aee99458a02386b6640579.scope: Deactivated successfully.
Sep 30 14:17:13 compute-0 podman[98037]: 2025-09-30 14:17:13.89232295 +0000 UTC m=+2.246802941 container attach dd55233323e41b0b65640e6eeb648c6ceec29945c0aee99458a02386b6640579 (image=quay.io/prometheus/alertmanager:v0.25.0, name=suspicious_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:13 compute-0 podman[98037]: 2025-09-30 14:17:13.892739981 +0000 UTC m=+2.247219962 container died dd55233323e41b0b65640e6eeb648c6ceec29945c0aee99458a02386b6640579 (image=quay.io/prometheus/alertmanager:v0.25.0, name=suspicious_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dc1f8f3c83d8d31200a8f7e2e8a0004516f5f321bccba15664711f2313b8725-merged.mount: Deactivated successfully.
Sep 30 14:17:13 compute-0 podman[98037]: 2025-09-30 14:17:13.934424479 +0000 UTC m=+2.288904470 container remove dd55233323e41b0b65640e6eeb648c6ceec29945c0aee99458a02386b6640579 (image=quay.io/prometheus/alertmanager:v0.25.0, name=suspicious_dubinsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:13 compute-0 podman[98037]: 2025-09-30 14:17:13.938188088 +0000 UTC m=+2.292668099 volume remove 5af37099633de84afa261a3fbac0d666b4b36ad6eaf87103410bf5b74f3284ae
Sep 30 14:17:13 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 23 completed events
Sep 30 14:17:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:17:13 compute-0 systemd[1]: libpod-conmon-dd55233323e41b0b65640e6eeb648c6ceec29945c0aee99458a02386b6640579.scope: Deactivated successfully.
Sep 30 14:17:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:13 compute-0 ceph-mgr[74485]: [progress WARNING root] Starting Global Recovery Event,9 pgs not in active + clean state
Sep 30 14:17:13 compute-0 podman[98192]: 2025-09-30 14:17:13.996797093 +0000 UTC m=+0.031923693 volume create d856606e44b0a45bcb27dc750ba8708190c51ba212668b4a71dc98272889118e
Sep 30 14:17:14 compute-0 podman[98192]: 2025-09-30 14:17:14.003824828 +0000 UTC m=+0.038951438 container create 62c4fa4c0afad1d68bfca02b9f40fcbd5a328b2460156f19b06062497735ed64 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_hellman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:14 compute-0 systemd[1]: Started libpod-conmon-62c4fa4c0afad1d68bfca02b9f40fcbd5a328b2460156f19b06062497735ed64.scope.
Sep 30 14:17:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117bda8c35c87f2dce737fc6dc2c332c167905b1a84ef62cf39421ff3d470d8b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:14 compute-0 podman[98192]: 2025-09-30 14:17:14.079369408 +0000 UTC m=+0.114496058 container init 62c4fa4c0afad1d68bfca02b9f40fcbd5a328b2460156f19b06062497735ed64 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_hellman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:14 compute-0 podman[98192]: 2025-09-30 14:17:13.983839631 +0000 UTC m=+0.018966261 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 14:17:14 compute-0 podman[98192]: 2025-09-30 14:17:14.086874556 +0000 UTC m=+0.122001166 container start 62c4fa4c0afad1d68bfca02b9f40fcbd5a328b2460156f19b06062497735ed64 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_hellman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:14 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90000f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:14 compute-0 happy_hellman[98208]: 65534 65534
Sep 30 14:17:14 compute-0 systemd[1]: libpod-62c4fa4c0afad1d68bfca02b9f40fcbd5a328b2460156f19b06062497735ed64.scope: Deactivated successfully.
Sep 30 14:17:14 compute-0 podman[98192]: 2025-09-30 14:17:14.090348058 +0000 UTC m=+0.125474668 container attach 62c4fa4c0afad1d68bfca02b9f40fcbd5a328b2460156f19b06062497735ed64 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_hellman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:14 compute-0 podman[98192]: 2025-09-30 14:17:14.090563543 +0000 UTC m=+0.125690153 container died 62c4fa4c0afad1d68bfca02b9f40fcbd5a328b2460156f19b06062497735ed64 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_hellman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-117bda8c35c87f2dce737fc6dc2c332c167905b1a84ef62cf39421ff3d470d8b-merged.mount: Deactivated successfully.
Sep 30 14:17:14 compute-0 podman[98192]: 2025-09-30 14:17:14.133915285 +0000 UTC m=+0.169041895 container remove 62c4fa4c0afad1d68bfca02b9f40fcbd5a328b2460156f19b06062497735ed64 (image=quay.io/prometheus/alertmanager:v0.25.0, name=happy_hellman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:14 compute-0 podman[98192]: 2025-09-30 14:17:14.137211232 +0000 UTC m=+0.172337862 volume remove d856606e44b0a45bcb27dc750ba8708190c51ba212668b4a71dc98272889118e
Sep 30 14:17:14 compute-0 systemd[1]: libpod-conmon-62c4fa4c0afad1d68bfca02b9f40fcbd5a328b2460156f19b06062497735ed64.scope: Deactivated successfully.
Sep 30 14:17:14 compute-0 systemd[1]: Reloading.
Sep 30 14:17:14 compute-0 systemd-rc-local-generator[98250]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:17:14 compute-0 systemd-sysv-generator[98254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:17:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:17:14 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Sep 30 14:17:14 compute-0 systemd[1]: Reloading.
Sep 30 14:17:14 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Sep 30 14:17:14 compute-0 systemd-rc-local-generator[98294]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:17:14 compute-0 systemd-sysv-generator[98297]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:17:14 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Sep 30 14:17:14 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:17:14 compute-0 ceph-mon[74194]: 7.0 deep-scrub starts
Sep 30 14:17:14 compute-0 ceph-mon[74194]: 7.0 deep-scrub ok
Sep 30 14:17:14 compute-0 ceph-mon[74194]: 11.9 scrub starts
Sep 30 14:17:14 compute-0 ceph-mon[74194]: 11.9 scrub ok
Sep 30 14:17:14 compute-0 ceph-mon[74194]: 5.d scrub starts
Sep 30 14:17:14 compute-0 ceph-mon[74194]: 5.d scrub ok
Sep 30 14:17:14 compute-0 ceph-mon[74194]: pgmap v64: 337 pgs: 4 active+recovery_wait, 4 active+recovery_wait+degraded, 1 active+recovering, 328 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4/219 objects degraded (1.826%); 3/219 objects misplaced (1.370%); 116 B/s, 1 keys/s, 2 objects/s recovering
Sep 30 14:17:14 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:15 compute-0 podman[98347]: 2025-09-30 14:17:14.933844441 +0000 UTC m=+0.021867777 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 14:17:15 compute-0 podman[98347]: 2025-09-30 14:17:15.168674709 +0000 UTC m=+0.256698025 volume create ddc0ab3592974bb99080d8c2adea1ce5e08c5ee1460ca5d4ada1331f578f0139
Sep 30 14:17:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:15 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:15 compute-0 podman[98347]: 2025-09-30 14:17:15.246080358 +0000 UTC m=+0.334103674 container create bd20ee432b94b120e4d4e48f8e160634ffb584df5fe8133f3bd8a9cff9cb64c7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:15 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 4/219 objects degraded (1.826%), 4 pgs degraded (PG_DEGRADED)
Sep 30 14:17:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:17:15 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Sep 30 14:17:15 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Sep 30 14:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7007a0b098904ef6e90e9786a87ccacf3f39b864b9dccf403d481cb5c7c0584/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7007a0b098904ef6e90e9786a87ccacf3f39b864b9dccf403d481cb5c7c0584/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:15 compute-0 podman[98347]: 2025-09-30 14:17:15.580073768 +0000 UTC m=+0.668097184 container init bd20ee432b94b120e4d4e48f8e160634ffb584df5fe8133f3bd8a9cff9cb64c7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:15 compute-0 podman[98347]: 2025-09-30 14:17:15.594077637 +0000 UTC m=+0.682101003 container start bd20ee432b94b120e4d4e48f8e160634ffb584df5fe8133f3bd8a9cff9cb64c7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:17:15.617Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Sep 30 14:17:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:17:15.617Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Sep 30 14:17:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:17:15.627Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Sep 30 14:17:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:17:15.628Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Sep 30 14:17:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:17:15.662Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Sep 30 14:17:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:17:15.663Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Sep 30 14:17:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:17:15.667Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Sep 30 14:17:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:17:15.667Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Sep 30 14:17:15 compute-0 bash[98347]: bd20ee432b94b120e4d4e48f8e160634ffb584df5fe8133f3bd8a9cff9cb64c7
Sep 30 14:17:15 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:17:15 compute-0 sudo[97972]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:17:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v65: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 172 B/s, 1 keys/s, 2 objects/s recovering
Sep 30 14:17:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Sep 30 14:17:15 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Sep 30 14:17:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Sep 30 14:17:15 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Sep 30 14:17:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:15 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:15 compute-0 ceph-mon[74194]: 10.d scrub starts
Sep 30 14:17:15 compute-0 ceph-mon[74194]: 10.d scrub ok
Sep 30 14:17:15 compute-0 ceph-mon[74194]: 8.0 scrub starts
Sep 30 14:17:15 compute-0 ceph-mon[74194]: 8.0 scrub ok
Sep 30 14:17:15 compute-0 ceph-mon[74194]: 5.1a scrub starts
Sep 30 14:17:15 compute-0 ceph-mon[74194]: 5.1a scrub ok
Sep 30 14:17:15 compute-0 ceph-mon[74194]: Health check update: Degraded data redundancy: 4/219 objects degraded (1.826%), 4 pgs degraded (PG_DEGRADED)
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Sep 30 14:17:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:16 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 6d07f7aa-a9a9-4472-a869-4b025251677d (Updating alertmanager deployment (+1 -> 1))
Sep 30 14:17:16 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 6d07f7aa-a9a9-4472-a869-4b025251677d (Updating alertmanager deployment (+1 -> 1)) in 5 seconds
Sep 30 14:17:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 60b7d403-312a-4b3d-87a9-67c4cd323c41 (Updating grafana deployment (+1 -> 1))
Sep 30 14:17:16 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Sep 30 14:17:16 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Sep 30 14:17:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Sep 30 14:17:16 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Sep 30 14:17:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Sep 30 14:17:16 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Sep 30 14:17:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/219 objects degraded (1.826%), 4 pgs degraded)
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Sep 30 14:17:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Sep 30 14:17:16 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.f( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.402248383s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 214.536071777s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.f( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.402192116s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.536071777s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.b( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.401890755s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 214.535964966s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.b( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.401838303s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.535964966s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.401349068s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 214.535812378s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.401330948s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.535812378s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.401209831s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 214.535736084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.401178360s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.535736084s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.13( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.400501251s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 214.535476685s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.3( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.400350571s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 214.535369873s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.7( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.400323868s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 214.535354614s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.3( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.400333405s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.535369873s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.7( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.400293350s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.535354614s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.13( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.400420189s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.535476685s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.17( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.399954796s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 214.535247803s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 63 pg[9.17( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.399932861s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.535247803s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:16 compute-0 sudo[98385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:17:16 compute-0 sudo[98385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:17:16 compute-0 sudo[98385]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:16 compute-0 sudo[98410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:17:16 compute-0 sudo[98410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:17:16 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Sep 30 14:17:16 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Sep 30 14:17:16 compute-0 ceph-mon[74194]: 7.1 scrub starts
Sep 30 14:17:16 compute-0 ceph-mon[74194]: 7.1 scrub ok
Sep 30 14:17:16 compute-0 ceph-mon[74194]: 8.7 scrub starts
Sep 30 14:17:16 compute-0 ceph-mon[74194]: 8.7 scrub ok
Sep 30 14:17:16 compute-0 ceph-mon[74194]: 5.4 scrub starts
Sep 30 14:17:16 compute-0 ceph-mon[74194]: 5.4 scrub ok
Sep 30 14:17:16 compute-0 ceph-mon[74194]: pgmap v65: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 172 B/s, 1 keys/s, 2 objects/s recovering
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:16 compute-0 ceph-mon[74194]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/219 objects degraded (1.826%), 4 pgs degraded)
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Sep 30 14:17:16 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Sep 30 14:17:16 compute-0 ceph-mon[74194]: osdmap e63: 3 total, 3 up, 3 in
Sep 30 14:17:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:17 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Sep 30 14:17:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Sep 30 14:17:17 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.17( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.3( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.7( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.17( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.13( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.7( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.3( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.13( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.b( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.f( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.f( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:17 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 64 pg[9.b( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:17 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Sep 30 14:17:17 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Sep 30 14:17:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:17:17.628Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000189311s
Sep 30 14:17:17 compute-0 sshd-session[98491]: Invalid user admin from 139.19.117.130 port 39744
Sep 30 14:17:17 compute-0 sshd-session[98491]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Sep 30 14:17:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v68: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 0 objects/s recovering
Sep 30 14:17:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Sep 30 14:17:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Sep 30 14:17:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Sep 30 14:17:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Sep 30 14:17:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:17 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:17:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:17:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:17:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:17:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:17:17 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:17:17 compute-0 ceph-mon[74194]: 10.c scrub starts
Sep 30 14:17:17 compute-0 ceph-mon[74194]: 10.c scrub ok
Sep 30 14:17:17 compute-0 ceph-mon[74194]: Regenerating cephadm self-signed grafana TLS certificates
Sep 30 14:17:17 compute-0 ceph-mon[74194]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Sep 30 14:17:17 compute-0 ceph-mon[74194]: Deploying daemon grafana.compute-0 on compute-0
Sep 30 14:17:17 compute-0 ceph-mon[74194]: 11.18 scrub starts
Sep 30 14:17:17 compute-0 ceph-mon[74194]: 11.18 scrub ok
Sep 30 14:17:17 compute-0 ceph-mon[74194]: 11.17 scrub starts
Sep 30 14:17:17 compute-0 ceph-mon[74194]: 11.17 scrub ok
Sep 30 14:17:17 compute-0 ceph-mon[74194]: osdmap e64: 3 total, 3 up, 3 in
Sep 30 14:17:17 compute-0 ceph-mon[74194]: pgmap v68: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 0 objects/s recovering
Sep 30 14:17:17 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Sep 30 14:17:17 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Sep 30 14:17:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:18 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Sep 30 14:17:18 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Sep 30 14:17:18 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Sep 30 14:17:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Sep 30 14:17:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Sep 30 14:17:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Sep 30 14:17:18 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Sep 30 14:17:18 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 65 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:18 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 65 pg[9.7( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:18 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 65 pg[9.b( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:18 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 65 pg[9.13( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:18 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 65 pg[9.f( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:18 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 65 pg[9.17( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:18 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 65 pg[9.3( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:18 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 65 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:18 compute-0 ceph-mon[74194]: 7.7 scrub starts
Sep 30 14:17:18 compute-0 ceph-mon[74194]: 7.7 scrub ok
Sep 30 14:17:18 compute-0 ceph-mon[74194]: 8.1a scrub starts
Sep 30 14:17:18 compute-0 ceph-mon[74194]: 8.1a scrub ok
Sep 30 14:17:18 compute-0 ceph-mon[74194]: 10.11 scrub starts
Sep 30 14:17:18 compute-0 ceph-mon[74194]: 10.11 scrub ok
Sep 30 14:17:18 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Sep 30 14:17:18 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Sep 30 14:17:18 compute-0 ceph-mon[74194]: osdmap e65: 3 total, 3 up, 3 in
Sep 30 14:17:18 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 24 completed events
Sep 30 14:17:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:17:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:18 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 6754bcdc-3dec-430b-a52a-62d8c51fc760 (Global Recovery Event) in 5 seconds
Sep 30 14:17:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:19 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:19 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Sep 30 14:17:19 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Sep 30 14:17:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Sep 30 14:17:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Sep 30 14:17:19 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.f( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.084016800s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 221.531616211s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.f( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083960533s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.531616211s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.b( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083684921s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 221.531402588s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.b( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083655357s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.531402588s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.075691223s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 221.523818970s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.075669289s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.523818970s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083480835s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 221.531723022s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083446503s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.531723022s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.7( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.082942009s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 221.531417847s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.13( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083055496s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 221.531539917s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.7( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.082916260s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.531417847s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.3( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083191872s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 221.531723022s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.13( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083010674s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.531539917s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.3( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=6 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083147049s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.531723022s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.17( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083061218s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 221.531646729s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:19 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 66 pg[9.17( v 48'1157 (0'0,48'1157] local-lis/les=64/65 n=5 ec=55/37 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=15.083041191s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.531646729s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v71: 337 pgs: 3 active+recovery_wait+remapped, 4 active+remapped, 1 active+recovering+remapped, 329 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 164 B/s, 4 objects/s recovering
Sep 30 14:17:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:19 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:19 compute-0 ceph-mon[74194]: 10.a scrub starts
Sep 30 14:17:19 compute-0 ceph-mon[74194]: 10.a scrub ok
Sep 30 14:17:19 compute-0 ceph-mon[74194]: 8.1e scrub starts
Sep 30 14:17:19 compute-0 ceph-mon[74194]: 8.1e scrub ok
Sep 30 14:17:19 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:19 compute-0 ceph-mon[74194]: osdmap e66: 3 total, 3 up, 3 in
Sep 30 14:17:19 compute-0 ceph-mon[74194]: pgmap v71: 337 pgs: 3 active+recovery_wait+remapped, 4 active+remapped, 1 active+recovering+remapped, 329 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 164 B/s, 4 objects/s recovering
Sep 30 14:17:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:20 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:17:20 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Sep 30 14:17:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Sep 30 14:17:20 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Sep 30 14:17:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Sep 30 14:17:20 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Sep 30 14:17:20 compute-0 ceph-mon[74194]: 10.b scrub starts
Sep 30 14:17:20 compute-0 ceph-mon[74194]: 10.b scrub ok
Sep 30 14:17:20 compute-0 ceph-mon[74194]: 8.1d scrub starts
Sep 30 14:17:20 compute-0 ceph-mon[74194]: 8.1d scrub ok
Sep 30 14:17:20 compute-0 ceph-mon[74194]: 12.d scrub starts
Sep 30 14:17:20 compute-0 ceph-mon[74194]: 12.d scrub ok
Sep 30 14:17:20 compute-0 ceph-mon[74194]: osdmap e67: 3 total, 3 up, 3 in
Sep 30 14:17:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:21 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:21 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Sep 30 14:17:21 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Sep 30 14:17:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v73: 337 pgs: 3 active+recovery_wait+remapped, 4 active+remapped, 1 active+recovering+remapped, 329 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 289 B/s, 2 keys/s, 5 objects/s recovering
Sep 30 14:17:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:21 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:22 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90001ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:22 compute-0 ceph-mon[74194]: 11.1f scrub starts
Sep 30 14:17:22 compute-0 ceph-mon[74194]: 11.1f scrub ok
Sep 30 14:17:22 compute-0 ceph-mon[74194]: 9.17 scrub starts
Sep 30 14:17:22 compute-0 ceph-mon[74194]: 9.17 scrub ok
Sep 30 14:17:22 compute-0 ceph-mon[74194]: 12.0 scrub starts
Sep 30 14:17:22 compute-0 ceph-mon[74194]: 12.0 scrub ok
Sep 30 14:17:22 compute-0 ceph-mon[74194]: pgmap v73: 337 pgs: 3 active+recovery_wait+remapped, 4 active+remapped, 1 active+recovering+remapped, 329 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 289 B/s, 2 keys/s, 5 objects/s recovering
Sep 30 14:17:22 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Sep 30 14:17:22 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Sep 30 14:17:22 compute-0 podman[98476]: 2025-09-30 14:17:22.921266545 +0000 UTC m=+6.067799006 container create 8966225ccf96e0b206ad119119c08c8afe00ee9179519294cc709c99d833149a (image=quay.io/ceph/grafana:10.4.0, name=trusting_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:22 compute-0 systemd[1]: Started libpod-conmon-8966225ccf96e0b206ad119119c08c8afe00ee9179519294cc709c99d833149a.scope.
Sep 30 14:17:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:17:22 compute-0 podman[98476]: 2025-09-30 14:17:22.900154409 +0000 UTC m=+6.046686890 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 14:17:22 compute-0 podman[98476]: 2025-09-30 14:17:22.991340052 +0000 UTC m=+6.137872533 container init 8966225ccf96e0b206ad119119c08c8afe00ee9179519294cc709c99d833149a (image=quay.io/ceph/grafana:10.4.0, name=trusting_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:22 compute-0 podman[98476]: 2025-09-30 14:17:22.998711146 +0000 UTC m=+6.145243607 container start 8966225ccf96e0b206ad119119c08c8afe00ee9179519294cc709c99d833149a (image=quay.io/ceph/grafana:10.4.0, name=trusting_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:23 compute-0 trusting_torvalds[98697]: 472 0
Sep 30 14:17:23 compute-0 systemd[1]: libpod-8966225ccf96e0b206ad119119c08c8afe00ee9179519294cc709c99d833149a.scope: Deactivated successfully.
Sep 30 14:17:23 compute-0 podman[98476]: 2025-09-30 14:17:23.003678977 +0000 UTC m=+6.150211438 container attach 8966225ccf96e0b206ad119119c08c8afe00ee9179519294cc709c99d833149a (image=quay.io/ceph/grafana:10.4.0, name=trusting_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:23 compute-0 podman[98476]: 2025-09-30 14:17:23.004344884 +0000 UTC m=+6.150877345 container died 8966225ccf96e0b206ad119119c08c8afe00ee9179519294cc709c99d833149a (image=quay.io/ceph/grafana:10.4.0, name=trusting_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-760ff6e48cafa718d9dca636a2a546f47f72bc290d304b18f42d579880197bc4-merged.mount: Deactivated successfully.
Sep 30 14:17:23 compute-0 podman[98476]: 2025-09-30 14:17:23.049385721 +0000 UTC m=+6.195918182 container remove 8966225ccf96e0b206ad119119c08c8afe00ee9179519294cc709c99d833149a (image=quay.io/ceph/grafana:10.4.0, name=trusting_torvalds, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:23 compute-0 systemd[1]: libpod-conmon-8966225ccf96e0b206ad119119c08c8afe00ee9179519294cc709c99d833149a.scope: Deactivated successfully.
Sep 30 14:17:23 compute-0 podman[98714]: 2025-09-30 14:17:23.111052256 +0000 UTC m=+0.042505741 container create 73d8203928f96acefbdee0da92a990688ff01dcdc7e09106c9c82c2b79d44a0b (image=quay.io/ceph/grafana:10.4.0, name=gallant_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:23 compute-0 systemd[1]: Started libpod-conmon-73d8203928f96acefbdee0da92a990688ff01dcdc7e09106c9c82c2b79d44a0b.scope.
Sep 30 14:17:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:17:23 compute-0 podman[98714]: 2025-09-30 14:17:23.179731195 +0000 UTC m=+0.111184710 container init 73d8203928f96acefbdee0da92a990688ff01dcdc7e09106c9c82c2b79d44a0b (image=quay.io/ceph/grafana:10.4.0, name=gallant_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:23 compute-0 podman[98714]: 2025-09-30 14:17:23.18522994 +0000 UTC m=+0.116683425 container start 73d8203928f96acefbdee0da92a990688ff01dcdc7e09106c9c82c2b79d44a0b (image=quay.io/ceph/grafana:10.4.0, name=gallant_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:23 compute-0 podman[98714]: 2025-09-30 14:17:23.092462796 +0000 UTC m=+0.023916301 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 14:17:23 compute-0 gallant_agnesi[98731]: 472 0
Sep 30 14:17:23 compute-0 systemd[1]: libpod-73d8203928f96acefbdee0da92a990688ff01dcdc7e09106c9c82c2b79d44a0b.scope: Deactivated successfully.
Sep 30 14:17:23 compute-0 podman[98714]: 2025-09-30 14:17:23.191213848 +0000 UTC m=+0.122667433 container attach 73d8203928f96acefbdee0da92a990688ff01dcdc7e09106c9c82c2b79d44a0b (image=quay.io/ceph/grafana:10.4.0, name=gallant_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:23 compute-0 podman[98714]: 2025-09-30 14:17:23.191465985 +0000 UTC m=+0.122919470 container died 73d8203928f96acefbdee0da92a990688ff01dcdc7e09106c9c82c2b79d44a0b (image=quay.io/ceph/grafana:10.4.0, name=gallant_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0cea177e0f0cf38f62ccf0f8552b7c10f35f2542d3896d0c5214aa1f00a2ade-merged.mount: Deactivated successfully.
Sep 30 14:17:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:23 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:23 compute-0 podman[98714]: 2025-09-30 14:17:23.227950876 +0000 UTC m=+0.159404361 container remove 73d8203928f96acefbdee0da92a990688ff01dcdc7e09106c9c82c2b79d44a0b (image=quay.io/ceph/grafana:10.4.0, name=gallant_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:23 compute-0 systemd[1]: libpod-conmon-73d8203928f96acefbdee0da92a990688ff01dcdc7e09106c9c82c2b79d44a0b.scope: Deactivated successfully.
Sep 30 14:17:23 compute-0 ceph-mon[74194]: 11.10 scrub starts
Sep 30 14:17:23 compute-0 ceph-mon[74194]: 11.10 scrub ok
Sep 30 14:17:23 compute-0 ceph-mon[74194]: 9.b scrub starts
Sep 30 14:17:23 compute-0 ceph-mon[74194]: 9.b scrub ok
Sep 30 14:17:23 compute-0 ceph-mon[74194]: 10.6 deep-scrub starts
Sep 30 14:17:23 compute-0 ceph-mon[74194]: 10.6 deep-scrub ok
Sep 30 14:17:23 compute-0 ceph-mon[74194]: 8.13 scrub starts
Sep 30 14:17:23 compute-0 ceph-mon[74194]: 8.13 scrub ok
Sep 30 14:17:23 compute-0 ceph-mon[74194]: 9.3 scrub starts
Sep 30 14:17:23 compute-0 ceph-mon[74194]: 9.3 scrub ok
Sep 30 14:17:23 compute-0 systemd[1]: Reloading.
Sep 30 14:17:23 compute-0 systemd-sysv-generator[98778]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:17:23 compute-0 systemd-rc-local-generator[98774]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:17:23 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Sep 30 14:17:23 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Sep 30 14:17:23 compute-0 systemd[1]: Reloading.
Sep 30 14:17:23 compute-0 systemd-sysv-generator[98817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:17:23 compute-0 systemd-rc-local-generator[98814]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:17:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v74: 337 pgs: 3 active+recovery_wait+remapped, 4 active+remapped, 1 active+recovering+remapped, 329 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 217 B/s, 1 keys/s, 3 objects/s recovering
Sep 30 14:17:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:23 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:23 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:17:23 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 25 completed events
Sep 30 14:17:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:17:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:23 compute-0 ceph-mgr[74485]: [progress WARNING root] Starting Global Recovery Event,8 pgs not in active + clean state
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:24 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:24 compute-0 podman[98871]: 2025-09-30 14:17:24.138944569 +0000 UTC m=+0.046393343 container create 93c8c5607d3b21ae8cda4d1f43e88d294c0ac0bcb4ca72548c6be243950b6313 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60fbe55edb8aeed63fb88162a2bf9bacdaa3d5779c54b3650ef61b3a1a18afcc/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60fbe55edb8aeed63fb88162a2bf9bacdaa3d5779c54b3650ef61b3a1a18afcc/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60fbe55edb8aeed63fb88162a2bf9bacdaa3d5779c54b3650ef61b3a1a18afcc/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60fbe55edb8aeed63fb88162a2bf9bacdaa3d5779c54b3650ef61b3a1a18afcc/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60fbe55edb8aeed63fb88162a2bf9bacdaa3d5779c54b3650ef61b3a1a18afcc/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:24 compute-0 podman[98871]: 2025-09-30 14:17:24.201531998 +0000 UTC m=+0.108980792 container init 93c8c5607d3b21ae8cda4d1f43e88d294c0ac0bcb4ca72548c6be243950b6313 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:24 compute-0 podman[98871]: 2025-09-30 14:17:24.207264169 +0000 UTC m=+0.114712953 container start 93c8c5607d3b21ae8cda4d1f43e88d294c0ac0bcb4ca72548c6be243950b6313 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:17:24 compute-0 podman[98871]: 2025-09-30 14:17:24.115692396 +0000 UTC m=+0.023141200 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 14:17:24 compute-0 bash[98871]: 93c8c5607d3b21ae8cda4d1f43e88d294c0ac0bcb4ca72548c6be243950b6313
Sep 30 14:17:24 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:17:24 compute-0 sudo[98410]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:17:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:17:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Sep 30 14:17:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 60b7d403-312a-4b3d-87a9-67c4cd323c41 (Updating grafana deployment (+1 -> 1))
Sep 30 14:17:24 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 60b7d403-312a-4b3d-87a9-67c4cd323c41 (Updating grafana deployment (+1 -> 1)) in 8 seconds
Sep 30 14:17:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Sep 30 14:17:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev 5c269cfc-3bce-47bb-9bef-5c64c8398507 (Updating ingress.rgw.default deployment (+4 -> 4))
Sep 30 14:17:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Sep 30 14:17:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.fikipk on compute-0
Sep 30 14:17:24 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.fikipk on compute-0
Sep 30 14:17:24 compute-0 ceph-mon[74194]: 12.1f scrub starts
Sep 30 14:17:24 compute-0 ceph-mon[74194]: 12.1f scrub ok
Sep 30 14:17:24 compute-0 ceph-mon[74194]: 11.11 scrub starts
Sep 30 14:17:24 compute-0 ceph-mon[74194]: 11.11 scrub ok
Sep 30 14:17:24 compute-0 ceph-mon[74194]: 9.13 scrub starts
Sep 30 14:17:24 compute-0 ceph-mon[74194]: 9.13 scrub ok
Sep 30 14:17:24 compute-0 ceph-mon[74194]: pgmap v74: 337 pgs: 3 active+recovery_wait+remapped, 4 active+remapped, 1 active+recovering+remapped, 329 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 217 B/s, 1 keys/s, 3 objects/s recovering
Sep 30 14:17:24 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.377497224Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-09-30T14:17:24Z
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379251441Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379297262Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379322733Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379346253Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379370104Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379393174Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379416425Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379439966Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Sep 30 14:17:24 compute-0 sudo[98906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379465196Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379488117Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379511698Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379535078Z level=info msg=Target target=[all]
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379565469Z level=info msg="Path Home" path=/usr/share/grafana
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.37958894Z level=info msg="Path Data" path=/var/lib/grafana
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.37961188Z level=info msg="Path Logs" path=/var/log/grafana
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379636441Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379659861Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=settings t=2025-09-30T14:17:24.379682612Z level=info msg="App mode production"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=sqlstore t=2025-09-30T14:17:24.380072472Z level=info msg="Connecting to DB" dbtype=sqlite3
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=sqlstore t=2025-09-30T14:17:24.380121444Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.380847533Z level=info msg="Starting DB migrations"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.382059605Z level=info msg="Executing migration" id="create migration_log table"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.383218485Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.15715ms
Sep 30 14:17:24 compute-0 sudo[98906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.385487225Z level=info msg="Executing migration" id="create user table"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.386195774Z level=info msg="Migration successfully executed" id="create user table" duration=708.409µs
Sep 30 14:17:24 compute-0 sudo[98906]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.388615267Z level=info msg="Executing migration" id="add unique index user.login"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.389448639Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=832.732µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.39175671Z level=info msg="Executing migration" id="add unique index user.email"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.39251329Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=757.04µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.395702004Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.396508135Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=806.361µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.398233101Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.398887268Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=654.317µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.401102326Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.403437718Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.334962ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.406255422Z level=info msg="Executing migration" id="create user table v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.406966381Z level=info msg="Migration successfully executed" id="create user table v2" duration=713.359µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.40996398Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.41072657Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=759.57µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.413237346Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.413992976Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=755.29µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.416665127Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.417143759Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=478.393µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.419902992Z level=info msg="Executing migration" id="Drop old table user_v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.420696563Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=794.051µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.423374093Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.424519603Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.14485ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.42703267Z level=info msg="Executing migration" id="Update user table charset"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.427112782Z level=info msg="Migration successfully executed" id="Update user table charset" duration=80.442µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.434187698Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.435535374Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.348786ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.438102001Z level=info msg="Executing migration" id="Add missing user data"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.4384467Z level=info msg="Migration successfully executed" id="Add missing user data" duration=345.149µs
Sep 30 14:17:24 compute-0 sudo[98931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:17:24 compute-0 sudo[98931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.443844183Z level=info msg="Executing migration" id="Add is_disabled column to user"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.445119136Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.276303ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.448130806Z level=info msg="Executing migration" id="Add index user.login/user.email"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.448861805Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=733.49µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.450836797Z level=info msg="Executing migration" id="Add is_service_account column to user"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.451796342Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=959.105µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.454243957Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.461717944Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.473437ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.464361213Z level=info msg="Executing migration" id="Add uid column to user"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.465522234Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.159051ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.468006409Z level=info msg="Executing migration" id="Update uid column values for users"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.468309767Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=303.468µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.470607078Z level=info msg="Executing migration" id="Add unique index user_uid"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.471539192Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=932.584µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.473902975Z level=info msg="Executing migration" id="create temp user table v1-7"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.474716776Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=813.721µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.477277814Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.478067774Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=791.951µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.481448593Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.482213864Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=765.331µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.486475856Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.487361179Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=885.803µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.490750118Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.491746915Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=998.517µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.496563872Z level=info msg="Executing migration" id="Update temp_user table charset"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.496702675Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=140.403µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.503060943Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.504312406Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.255063ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.506585376Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.50751443Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=926.814µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.510375846Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.511255869Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=879.383µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.513857647Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.514592517Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=735µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.519592078Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.523306346Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.715478ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.525560966Z level=info msg="Executing migration" id="create temp_user v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.526626664Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.065468ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.530318981Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.531110492Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=792.811µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.533427653Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.534333977Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=906.484µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.537445439Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.538554838Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.112409ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.547855463Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.548906791Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.054048ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.552025323Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.552513536Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=488.273µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.554642172Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.555380071Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=739.319µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.557157608Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.55760866Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=451.312µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.559374167Z level=info msg="Executing migration" id="create star table"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.560227869Z level=info msg="Migration successfully executed" id="create star table" duration=854.172µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.562254572Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Sep 30 14:17:24 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.563080004Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=825.022µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.568284791Z level=info msg="Executing migration" id="create org table v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.569014651Z level=info msg="Migration successfully executed" id="create org table v1" duration=730.77µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.572481562Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Sep 30 14:17:24 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.573383096Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=901.424µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.576748084Z level=info msg="Executing migration" id="create org_user table v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.577426982Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=677.728µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.581233703Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.582245939Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.012356ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.586320727Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.587009885Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=688.498µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.589509261Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.590127737Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=618.376µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.594381379Z level=info msg="Executing migration" id="Update org table charset"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.59443921Z level=info msg="Migration successfully executed" id="Update org table charset" duration=58.561µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.597118811Z level=info msg="Executing migration" id="Update org_user table charset"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.597217234Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=98.833µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.59971984Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.599962756Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=233.506µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.602658607Z level=info msg="Executing migration" id="create dashboard table"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.603592962Z level=info msg="Migration successfully executed" id="create dashboard table" duration=934.345µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.608113571Z level=info msg="Executing migration" id="add index dashboard.account_id"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.609058556Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=946.075µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.611734066Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.612445905Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=709.909µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.616013169Z level=info msg="Executing migration" id="create dashboard_tag table"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.616721168Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=707.869µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.62022439Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.620942699Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=717.869µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.62364761Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.624478872Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=832.582µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.630151232Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.635476502Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.322221ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.63843647Z level=info msg="Executing migration" id="create dashboard v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.639442786Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.008886ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.641524261Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.642288831Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=764.83µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.645591218Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.646883472Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.294314ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.649735127Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.650505978Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=772.921µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.652627524Z level=info msg="Executing migration" id="drop table dashboard_v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.654160404Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.53345ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.658974131Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.659231818Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=261.367µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.66272721Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.664589429Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.862369ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.669729724Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.671722037Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.994093ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.675194668Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.67715995Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.964592ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.681067563Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.681857294Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=788.951µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.68512305Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.68665001Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.52764ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.689817624Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.690519792Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=702.338µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.694548928Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.695401741Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=854.783µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.698645216Z level=info msg="Executing migration" id="Update dashboard table charset"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.69880278Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=162.024µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.705154288Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.705298512Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=142.423µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.73979338Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.74169111Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.90207ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.793139176Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.795068867Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.930341ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.797394868Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.799292928Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.89758ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.805468351Z level=info msg="Executing migration" id="Add column uid in dashboard"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.807359991Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.89482ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.810606746Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.810808491Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=202.555µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.81415984Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.815165786Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.008726ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.818497544Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.819434639Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=939.875µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.822973952Z level=info msg="Executing migration" id="Update dashboard title length"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.823011843Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=35.751µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.82860057Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.829473723Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=874.093µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.835556044Z level=info msg="Executing migration" id="create dashboard_provisioning"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.836285563Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=730.85µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.849382918Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.854927274Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.543286ms
Sep 30 14:17:24 compute-0 podman[98997]: 2025-09-30 14:17:24.859895225 +0000 UTC m=+0.061369508 container create d9d2163fa8f7f2c52c62a363dfa3da381b418d7804125107beb954a5abef75e7 (image=quay.io/ceph/haproxy:2.3, name=laughing_shaw)
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.861947379Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.862852973Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=910.884µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.866057337Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.866779366Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=724.799µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.873513764Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.874252983Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=739.209µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.877274053Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.8775402Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=266.577µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.879425919Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.879983174Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=557.195µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.882811899Z level=info msg="Executing migration" id="Add check_sum column"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.88475523Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.944331ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.88665831Z level=info msg="Executing migration" id="Add index for dashboard_title"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.887839491Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.180851ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.892187896Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.892479553Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=295.107µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.895859642Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.896047157Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=185.125µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.899363135Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.900236878Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=870.293µs
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.904789078Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.906742619Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.957201ms
Sep 30 14:17:24 compute-0 systemd[1]: Started libpod-conmon-d9d2163fa8f7f2c52c62a363dfa3da381b418d7804125107beb954a5abef75e7.scope.
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.910976451Z level=info msg="Executing migration" id="create data_source table"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.912364337Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.389376ms
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.916990069Z level=info msg="Executing migration" id="add index data_source.account_id"
Sep 30 14:17:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:24.9181612Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.174301ms
Sep 30 14:17:24 compute-0 podman[98997]: 2025-09-30 14:17:24.822811328 +0000 UTC m=+0.024285631 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Sep 30 14:17:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.069510827Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.070428781Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=920.694µs
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.205242903Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.206320832Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.081529ms
Sep 30 14:17:25 compute-0 podman[98997]: 2025-09-30 14:17:25.208809337 +0000 UTC m=+0.410283640 container init d9d2163fa8f7f2c52c62a363dfa3da381b418d7804125107beb954a5abef75e7 (image=quay.io/ceph/haproxy:2.3, name=laughing_shaw)
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.210695317Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.211594411Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=899.404µs
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.214549738Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Sep 30 14:17:25 compute-0 podman[98997]: 2025-09-30 14:17:25.215689699 +0000 UTC m=+0.417163982 container start d9d2163fa8f7f2c52c62a363dfa3da381b418d7804125107beb954a5abef75e7 (image=quay.io/ceph/haproxy:2.3, name=laughing_shaw)
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.220223118Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.67089ms
Sep 30 14:17:25 compute-0 laughing_shaw[99013]: 0 0
Sep 30 14:17:25 compute-0 systemd[1]: libpod-d9d2163fa8f7f2c52c62a363dfa3da381b418d7804125107beb954a5abef75e7.scope: Deactivated successfully.
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:25 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90002f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.246645744Z level=info msg="Executing migration" id="create data_source table v2"
Sep 30 14:17:25 compute-0 podman[98997]: 2025-09-30 14:17:25.247270001 +0000 UTC m=+0.448744284 container attach d9d2163fa8f7f2c52c62a363dfa3da381b418d7804125107beb954a5abef75e7 (image=quay.io/ceph/haproxy:2.3, name=laughing_shaw)
Sep 30 14:17:25 compute-0 podman[98997]: 2025-09-30 14:17:25.247749053 +0000 UTC m=+0.449223336 container died d9d2163fa8f7f2c52c62a363dfa3da381b418d7804125107beb954a5abef75e7 (image=quay.io/ceph/haproxy:2.3, name=laughing_shaw)
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.247797164Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.15572ms
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.251827851Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.252768245Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=946.445µs
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.258127147Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.259313498Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.187791ms
Sep 30 14:17:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.268424878Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.269157337Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=733.759µs
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.27307007Z level=info msg="Executing migration" id="Add column with_credentials"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.275014632Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.946591ms
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.281288617Z level=info msg="Executing migration" id="Add secure json data column"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.283285779Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.998272ms
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.286388951Z level=info msg="Executing migration" id="Update data_source table charset"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.286414582Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=27.041µs
Sep 30 14:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1adcfb4e2cc76562d279fc23d6500af2a3c0ceec94eaeec81e32dc3954374f11-merged.mount: Deactivated successfully.
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.289654047Z level=info msg="Executing migration" id="Update initial version to 1"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.289913594Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=257.147µs
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.293573121Z level=info msg="Executing migration" id="Add read_only data column"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.2954495Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.876589ms
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.297394051Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.297574866Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=180.345µs
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.302492716Z level=info msg="Executing migration" id="Update json_data with nulls"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.302688891Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=199.206µs
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.305524755Z level=info msg="Executing migration" id="Add uid column"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.307679842Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.154547ms
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.312088618Z level=info msg="Executing migration" id="Update uid value"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.312386816Z level=info msg="Migration successfully executed" id="Update uid value" duration=301.648µs
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.317187873Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Sep 30 14:17:25 compute-0 podman[98997]: 2025-09-30 14:17:25.31784633 +0000 UTC m=+0.519320613 container remove d9d2163fa8f7f2c52c62a363dfa3da381b418d7804125107beb954a5abef75e7 (image=quay.io/ceph/haproxy:2.3, name=laughing_shaw)
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.32391972Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=6.728457ms
Sep 30 14:17:25 compute-0 systemd[1]: libpod-conmon-d9d2163fa8f7f2c52c62a363dfa3da381b418d7804125107beb954a5abef75e7.scope: Deactivated successfully.
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.326882398Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.327832893Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=950.995µs
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.330443062Z level=info msg="Executing migration" id="create api_key table"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.331207812Z level=info msg="Migration successfully executed" id="create api_key table" duration=764.26µs
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.364572851Z level=info msg="Executing migration" id="add index api_key.account_id"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.365633529Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.064758ms
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.399295636Z level=info msg="Executing migration" id="add index api_key.key"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.400128098Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=838.802µs
Sep 30 14:17:25 compute-0 ceph-mon[74194]: 7.17 scrub starts
Sep 30 14:17:25 compute-0 ceph-mon[74194]: 7.17 scrub ok
Sep 30 14:17:25 compute-0 ceph-mon[74194]: Deploying daemon haproxy.rgw.default.compute-0.fikipk on compute-0
Sep 30 14:17:25 compute-0 ceph-mon[74194]: 8.1 scrub starts
Sep 30 14:17:25 compute-0 ceph-mon[74194]: 8.1 scrub ok
Sep 30 14:17:25 compute-0 ceph-mon[74194]: 12.17 scrub starts
Sep 30 14:17:25 compute-0 ceph-mon[74194]: 12.17 scrub ok
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.496721983Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.497901854Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.182971ms
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.525502851Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.52657778Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.078149ms
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.542061838Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:25.543256129Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.198351ms
Sep 30 14:17:25 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Sep 30 14:17:25 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:17:25.631Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003192015s
Sep 30 14:17:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v75: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 181 B/s, 1 keys/s, 3 objects/s recovering
Sep 30 14:17:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Sep 30 14:17:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Sep 30 14:17:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Sep 30 14:17:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Sep 30 14:17:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:25 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:26 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:26.210542461Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Sep 30 14:17:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:26.211563818Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.024907ms
Sep 30 14:17:26 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Sep 30 14:17:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:27 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:27 compute-0 sshd-session[98491]: Connection closed by invalid user admin 139.19.117.130 port 39744 [preauth]
Sep 30 14:17:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v76: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 1 keys/s, 1 objects/s recovering
Sep 30 14:17:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:27 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90002f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Sep 30 14:17:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Sep 30 14:17:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Sep 30 14:17:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Sep 30 14:17:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:28 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.130408397Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Sep 30 14:17:28 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Sep 30 14:17:28 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.13812393Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.713563ms
Sep 30 14:17:28 compute-0 systemd[1]: Reloading.
Sep 30 14:17:28 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Sep 30 14:17:28 compute-0 ceph-mon[74194]: 10.1a scrub starts
Sep 30 14:17:28 compute-0 ceph-mon[74194]: 10.1a scrub ok
Sep 30 14:17:28 compute-0 ceph-mon[74194]: 11.2 scrub starts
Sep 30 14:17:28 compute-0 ceph-mon[74194]: 11.2 scrub ok
Sep 30 14:17:28 compute-0 ceph-mon[74194]: 7.a scrub starts
Sep 30 14:17:28 compute-0 ceph-mon[74194]: 7.a scrub ok
Sep 30 14:17:28 compute-0 ceph-mon[74194]: pgmap v75: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 181 B/s, 1 keys/s, 3 objects/s recovering
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.154021039Z level=info msg="Executing migration" id="create api_key table v2"
Sep 30 14:17:28 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Sep 30 14:17:28 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.155297273Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.280224ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.163397486Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.164382902Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=984.196µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.173390929Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.174425857Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.039247ms
Sep 30 14:17:28 compute-0 systemd-rc-local-generator[99064]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:17:28 compute-0 systemd-sysv-generator[99068]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.247519782Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.248607901Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.089669ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.267191781Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.267777426Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=612.506µs
Sep 30 14:17:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Sep 30 14:17:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Sep 30 14:17:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.270868027Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.271673709Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=806.522µs
Sep 30 14:17:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.289728314Z level=info msg="Executing migration" id="Update api_key table charset"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.289769615Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=45.741µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.337923814Z level=info msg="Executing migration" id="Add expires to api_key table"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.340480672Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.559257ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.349028167Z level=info msg="Executing migration" id="Add service account foreign key"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.35257878Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.557934ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.418736793Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.419015731Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=283.238µs
Sep 30 14:17:28 compute-0 systemd[1]: Reloading.
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.483563122Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.486317304Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.757033ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.52335473Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.526498273Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.145533ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.537552794Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.539052314Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.502449ms
Sep 30 14:17:28 compute-0 systemd-rc-local-generator[99106]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:17:28 compute-0 systemd-sysv-generator[99110]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.54422706Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.54500633Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=781.47µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.548661267Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.549671653Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.011016ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.552220371Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.553160825Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=937.725µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.555706662Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.556783791Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.080519ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.559209965Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.560129109Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=918.724µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.564747461Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.564878494Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=133.734µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.584533101Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.584622023Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=94.742µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.643207427Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.646339849Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.134282ms
Sep 30 14:17:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 68 pg[9.15( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=4 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=68 pruub=15.053972244s) [2] r=-1 lpr=68 pi=[55,68)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 230.536468506s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 68 pg[9.15( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=4 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=68 pruub=15.053927422s) [2] r=-1 lpr=68 pi=[55,68)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.536468506s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 68 pg[9.d( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=68 pruub=15.052972794s) [2] r=-1 lpr=68 pi=[55,68)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 230.536056519s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 68 pg[9.d( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=68 pruub=15.052945137s) [2] r=-1 lpr=68 pi=[55,68)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.536056519s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 68 pg[9.5( v 58'1160 (0'0,58'1160] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=68 pruub=15.052925110s) [2] r=-1 lpr=68 pi=[55,68)/1 crt=56'1158 lcod 57'1159 mlcod 57'1159 active pruub 230.536163330s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 68 pg[9.5( v 58'1160 (0'0,58'1160] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=68 pruub=15.052891731s) [2] r=-1 lpr=68 pi=[55,68)/1 crt=56'1158 lcod 57'1159 mlcod 0'0 unknown NOTIFY pruub 230.536163330s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 68 pg[9.1d( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=68 pruub=15.052262306s) [2] r=-1 lpr=68 pi=[55,68)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 230.535919189s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 68 pg[9.1d( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=68 pruub=15.052222252s) [2] r=-1 lpr=68 pi=[55,68)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.535919189s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.650119079Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.652462361Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.345052ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.660522173Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.660726558Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=207.995µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.66269839Z level=info msg="Executing migration" id="create quota table v1"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.66343997Z level=info msg="Migration successfully executed" id="create quota table v1" duration=741.58µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.66799642Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.668885773Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=892.173µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.704244185Z level=info msg="Executing migration" id="Update quota table charset"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.704314447Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=74.862µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.716627771Z level=info msg="Executing migration" id="create plugin_setting table"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.71772524Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.096169ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.720471753Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.721713835Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.296394ms
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.72987075Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.73289886Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.02881ms
Sep 30 14:17:28 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.fikipk for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.802292779Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.80272994Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=440.482µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.830736068Z level=info msg="Executing migration" id="create session table"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.831703333Z level=info msg="Migration successfully executed" id="create session table" duration=973.045µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.884630958Z level=info msg="Executing migration" id="Drop old table playlist table"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.884822533Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=194.795µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.914352071Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.914527576Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=179.405µs
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.980463483Z level=info msg="Executing migration" id="create playlist table v2"
Sep 30 14:17:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:28.981372197Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=908.884µs
Sep 30 14:17:29 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 26 completed events
Sep 30 14:17:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.010436403Z level=info msg="Executing migration" id="create playlist item table v2"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.011388728Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=955.485µs
Sep 30 14:17:29 compute-0 podman[99162]: 2025-09-30 14:17:28.932450938 +0000 UTC m=+0.020904432 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Sep 30 14:17:29 compute-0 podman[99162]: 2025-09-30 14:17:29.061611831 +0000 UTC m=+0.150065305 container create 73d92d0d3427c55171d47802622bd9ac0a80df30a60d7128643c2709f857c840 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-rgw-default-compute-0-fikipk)
Sep 30 14:17:29 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 5.c scrub starts
Sep 30 14:17:29 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 5.c scrub ok
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.127682462Z level=info msg="Executing migration" id="Update playlist table charset"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.127759804Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=85.722µs
Sep 30 14:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9cda47c6a0b0f85c52277e99099d8569ae975445de41de261d26a8fa48b905/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.180836513Z level=info msg="Executing migration" id="Update playlist_item table charset"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.180887764Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=55.442µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:29 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.272281142Z level=info msg="Executing migration" id="Add playlist column created_at"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.275040295Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.759252ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.359539991Z level=info msg="Executing migration" id="Add playlist column updated_at"
Sep 30 14:17:29 compute-0 podman[99162]: 2025-09-30 14:17:29.362048477 +0000 UTC m=+0.450501961 container init 73d92d0d3427c55171d47802622bd9ac0a80df30a60d7128643c2709f857c840 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-rgw-default-compute-0-fikipk)
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.362196381Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.65976ms
Sep 30 14:17:29 compute-0 podman[99162]: 2025-09-30 14:17:29.367231474 +0000 UTC m=+0.455684948 container start 73d92d0d3427c55171d47802622bd9ac0a80df30a60d7128643c2709f857c840 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-rgw-default-compute-0-fikipk)
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.370421968Z level=info msg="Executing migration" id="drop preferences table v2"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.370592062Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=173.374µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-rgw-default-compute-0-fikipk[99177]: [NOTICE] 272/141729 (2) : New worker #1 (4) forked
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.389913321Z level=info msg="Executing migration" id="drop preferences table v3"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.390054085Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=144.234µs
Sep 30 14:17:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000053s ======
Sep 30 14:17:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:29.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Sep 30 14:17:29 compute-0 bash[99162]: 73d92d0d3427c55171d47802622bd9ac0a80df30a60d7128643c2709f857c840
Sep 30 14:17:29 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.fikipk for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.414691714Z level=info msg="Executing migration" id="create preferences table v3"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.41568145Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=996.246µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.435082661Z level=info msg="Executing migration" id="Update preferences table charset"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.435243346Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=165.055µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.440491544Z level=info msg="Executing migration" id="Add column team_id in preferences"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.444923371Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.418826ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.447968671Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.448497655Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=531.874µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.450788375Z level=info msg="Executing migration" id="Add column week_start in preferences"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.454782611Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.981245ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.458440077Z level=info msg="Executing migration" id="Add column preferences.json_data"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.463137711Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.690353ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.466425877Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.466956601Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=535.184µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.47107655Z level=info msg="Executing migration" id="Add preferences index org_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.473621147Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=2.551097ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.47754944Z level=info msg="Executing migration" id="Add preferences index user_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.479658076Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=2.113106ms
Sep 30 14:17:29 compute-0 sudo[98931]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.484076182Z level=info msg="Executing migration" id="create alert table v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.485991363Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.915941ms
Sep 30 14:17:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.489804513Z level=info msg="Executing migration" id="add index alert org_id & id "
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.49121363Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.414047ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.495553195Z level=info msg="Executing migration" id="add index alert state"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.496563971Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.011956ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.499048067Z level=info msg="Executing migration" id="add index alert dashboard_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.49994126Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=893.013µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.502134678Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.502819976Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=684.848µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.507645143Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.508822324Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.178301ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.511135795Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.51208324Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=946.905µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.514467483Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.523032409Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=8.561546ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.525448062Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.526230893Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=782.191µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.531951894Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.532917039Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=965.265µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.535681102Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.536081443Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=400.271µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.537780617Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.538506896Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=730.509µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.541422983Z level=info msg="Executing migration" id="create alert_notification table v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.542563093Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.14392ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.546158758Z level=info msg="Executing migration" id="Add column is_default"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.549326432Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.168814ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.551232812Z level=info msg="Executing migration" id="Add column frequency"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.554729944Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.493702ms
Sep 30 14:17:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:29 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 2bfd320e-eca1-49dd-9798-91ea89e7feda (Global Recovery Event) in 6 seconds
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.56405999Z level=info msg="Executing migration" id="Add column send_reminder"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.567426988Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.367758ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.569443182Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.57241793Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.974048ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.575216664Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.576081096Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=864.742µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.578195652Z level=info msg="Executing migration" id="Update alert table charset"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.578235043Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=39.711µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.594927593Z level=info msg="Executing migration" id="Update alert_notification table charset"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.594980264Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=59.811µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.606074487Z level=info msg="Executing migration" id="create notification_journal table v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.60693963Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=867.162µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.609647741Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.61038187Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=734.199µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.63200636Z level=info msg="Executing migration" id="drop alert_notification_journal"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.632997786Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=995.656µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.6414934Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.642606899Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.115879ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.64527612Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.64605919Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=782.77µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.649903052Z level=info msg="Executing migration" id="Add for to alert table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.652817148Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.914806ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.655561531Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.658608501Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.04996ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.660877331Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.661037705Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=159.904µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.664467255Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.665294987Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=827.852µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.668858391Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.669762305Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=904.404µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.674611763Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.677414787Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.803283ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.681004081Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.681068033Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=65.482µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.683534288Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.68436457Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=829.812µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.686608289Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.687550964Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=945.015µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.691266381Z level=info msg="Executing migration" id="Drop old annotation table v4"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.691384485Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=118.243µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.693296025Z level=info msg="Executing migration" id="create annotation table v5"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.694202829Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=906.934µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.6961611Z level=info msg="Executing migration" id="add index annotation 0 v3"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.697069414Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=909.624µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.702447656Z level=info msg="Executing migration" id="add index annotation 1 v3"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.70335597Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=910.744µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.706667757Z level=info msg="Executing migration" id="add index annotation 2 v3"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.707400507Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=732.84µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.711808543Z level=info msg="Executing migration" id="add index annotation 3 v3"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.712658815Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=850.342µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.716029684Z level=info msg="Executing migration" id="add index annotation 4 v3"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.716887237Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=857.232µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.718968171Z level=info msg="Executing migration" id="Update annotation table charset"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.718990222Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=22.751µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.723774658Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.728963225Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.189287ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.731617665Z level=info msg="Executing migration" id="Drop category_id index"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.732488598Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=871.212µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.734403818Z level=info msg="Executing migration" id="Add column tags to annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.737438118Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.03483ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.739523113Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.740228372Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=705.699µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.742755148Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.743563319Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=807.431µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.746413745Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.747289828Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=877.194µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.750523073Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.75915432Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=8.626497ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.765080546Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.765812746Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=731.95µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.767912211Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.768661331Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=748.54µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.770721585Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.770973372Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=252.287µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.772589584Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.773091507Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=502.053µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.775787268Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.775948753Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=162.035µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.777962116Z level=info msg="Executing migration" id="Add created time to annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.780953185Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=2.991149ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.782985178Z level=info msg="Executing migration" id="Add updated time to annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.785989227Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.004069ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.78875808Z level=info msg="Executing migration" id="Add index for created in annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.7895143Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=757.5µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.791861442Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.792595691Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=734.769µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.795131148Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.795351704Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=221.296µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.79859944Z level=info msg="Executing migration" id="Add epoch_end column"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.801478875Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=2.879775ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.803221411Z level=info msg="Executing migration" id="Add index for epoch_end"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.803976981Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=755.58µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.806363044Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.806501628Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=138.704µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.808216693Z level=info msg="Executing migration" id="Move region to single row"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.808636544Z level=info msg="Migration successfully executed" id="Move region to single row" duration=420.101µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.811766296Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.812785123Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.019057ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.814305613Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.815269939Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=962.126µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.817030675Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.818131204Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.096069ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.827948703Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.828853057Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=904.874µs
Sep 30 14:17:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v78: 337 pgs: 4 unknown, 333 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.832623036Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.833460918Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=838.512µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.83544396Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.836143749Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=697.289µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.837761211Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.837806923Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=46.122µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:29 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.840013421Z level=info msg="Executing migration" id="create test_data table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.84073955Z level=info msg="Migration successfully executed" id="create test_data table" duration=726.439µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.842799804Z level=info msg="Executing migration" id="create dashboard_version table v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.843535063Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=735.399µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.845748712Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.846576604Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=828.582µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.848580186Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.849502201Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=923.245µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.851530394Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.851730489Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=200.485µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.855502719Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.85592935Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=427.471µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.857666276Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.857729877Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=64.191µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.862947125Z level=info msg="Executing migration" id="create team table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.863761326Z level=info msg="Migration successfully executed" id="create team table" duration=815.731µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.86580614Z level=info msg="Executing migration" id="add index team.org_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.866765936Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=959.336µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.868808879Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.869709413Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=903.184µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.871820539Z level=info msg="Executing migration" id="Add column uid in team"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.875110335Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.293926ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.876825241Z level=info msg="Executing migration" id="Update uid column values in team"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.876960894Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=135.923µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.878625968Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.879577723Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=951.615µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.883361303Z level=info msg="Executing migration" id="create team member table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.884053961Z level=info msg="Migration successfully executed" id="create team member table" duration=693.598µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.885956521Z level=info msg="Executing migration" id="add index team_member.org_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.886604258Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=647.567µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.888627792Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.889644808Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.019907ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.891720183Z level=info msg="Executing migration" id="add index team_member.team_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.892481683Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=760.92µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.898103241Z level=info msg="Executing migration" id="Add column email to team table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.90223755Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.137239ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.90452017Z level=info msg="Executing migration" id="Add column external to team_member table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.908578557Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.058907ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.910339014Z level=info msg="Executing migration" id="Add column permission to team_member table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.913846736Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.507772ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.915555101Z level=info msg="Executing migration" id="create dashboard acl table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.916421784Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=866.683µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.921482817Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.922266368Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=783.281µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.924819305Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.925850362Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.031047ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.927981529Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.929128599Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.146511ms
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.931507861Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.932360574Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=852.113µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.934526841Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.935334652Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=805.601µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.939080651Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.939991455Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=909.594µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.946075605Z level=info msg="Executing migration" id="add index dashboard_permission"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.947062441Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=987.306µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.976782684Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.977480403Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=694.778µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.991752999Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.992124799Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=376.54µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.994286466Z level=info msg="Executing migration" id="create tag table"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.995087207Z level=info msg="Migration successfully executed" id="create tag table" duration=800.691µs
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.997211313Z level=info msg="Executing migration" id="add index tag.key_value"
Sep 30 14:17:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:29.997975263Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=763.97µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.002226435Z level=info msg="Executing migration" id="create login attempt table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.003054527Z level=info msg="Migration successfully executed" id="create login attempt table" duration=828.221µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.007435862Z level=info msg="Executing migration" id="add index login_attempt.username"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.008291645Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=855.442µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.015653279Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.01648493Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=831.442µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.023528196Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.033885829Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=10.355513ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.03810144Z level=info msg="Executing migration" id="create login_attempt v2"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.038931982Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=830.862µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.040919504Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.042318251Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.398707ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.045532986Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.045956147Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=424.631µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.047690643Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.048385801Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=694.738µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.050018614Z level=info msg="Executing migration" id="create user auth table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.050751463Z level=info msg="Migration successfully executed" id="create user auth table" duration=732.159µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.052766506Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.053700411Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=933.285µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.055785776Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.055859638Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=74.342µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.057886131Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.062080412Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.193711ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.063625422Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.067247428Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.621736ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.068958013Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.072480986Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.522493ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.073986415Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.077763795Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.77689ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.080081096Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.080954809Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=874.103µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.083324052Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.08783114Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.505578ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.089856944Z level=info msg="Executing migration" id="create server_lock table"
Sep 30 14:17:30 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.091083446Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.227212ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.094511506Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.09578283Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.273644ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.098151432Z level=info msg="Executing migration" id="create user auth token table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.099154379Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.003807ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.101144741Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Sep 30 14:17:30 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:30 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.102553568Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.408547ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.104921081Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.105889126Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=968.106µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.107747115Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.108777452Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.032467ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.111235017Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.115490479Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.254942ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.121781725Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.122888834Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.110489ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.12958838Z level=info msg="Executing migration" id="create cache_data table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.131312876Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.730856ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.138275729Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.139683086Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.411457ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.145776847Z level=info msg="Executing migration" id="create short_url table v1"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.147077871Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.303064ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.149812943Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.150866221Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.053498ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.152886784Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.152940366Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=54.442µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.154844866Z level=info msg="Executing migration" id="delete alert_definition table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.154925838Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=82.242µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.15690983Z level=info msg="Executing migration" id="recreate alert_definition table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.15803224Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.12168ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.163501614Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.164783218Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.283944ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.168368902Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.169740648Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.370206ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.171913806Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.171965267Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=52.351µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.173644041Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.174593556Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=949.625µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.176350153Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.177288547Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=941.954µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.186834539Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.187962969Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.13153ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.189682884Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.190532956Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=854.042µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.194009298Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.198326432Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.316074ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.200817437Z level=info msg="Executing migration" id="drop alert_definition table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.201859145Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.038468ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.204309919Z level=info msg="Executing migration" id="delete alert_definition_version table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.204382011Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=72.572µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.206134557Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.207011241Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=876.173µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.209027874Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.209839235Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=811.151µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.215105944Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.21609007Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=986.376µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.218030451Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.218094693Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=65.391µs
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.219774837Z level=info msg="Executing migration" id="drop alert_definition_version table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.220867706Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.092579ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.222795706Z level=info msg="Executing migration" id="create alert_instance table"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.223863354Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.066488ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.225732304Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.226809222Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.076728ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.228794004Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.229799081Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.005197ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.23544919Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.239724742Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.276382ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.25594647Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.257295005Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.352755ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.351463507Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.352748091Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.288054ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.481965565Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.507034176Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.068911ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.602036429Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.622510498Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=20.470999ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.676730597Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.677846946Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.118629ms
Sep 30 14:17:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Sep 30 14:17:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Sep 30 14:17:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.737873348Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 10.1c scrub starts
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 10.1c scrub ok
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 11.6 scrub starts
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 8.5 scrub starts
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 8.5 scrub ok
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 10.1d scrub starts
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 10.1d scrub ok
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 10.10 scrub starts
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 10.10 scrub ok
Sep 30 14:17:30 compute-0 ceph-mon[74194]: pgmap v76: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 1 keys/s, 1 objects/s recovering
Sep 30 14:17:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Sep 30 14:17:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 12.1b scrub starts
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 12.1b scrub ok
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 11.6 scrub ok
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 11.15 scrub starts
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 11.15 scrub ok
Sep 30 14:17:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Sep 30 14:17:30 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Sep 30 14:17:30 compute-0 ceph-mon[74194]: osdmap e68: 3 total, 3 up, 3 in
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 11.16 deep-scrub starts
Sep 30 14:17:30 compute-0 ceph-mon[74194]: 11.16 deep-scrub ok
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.741342439Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=3.469122ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.754949898Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Sep 30 14:17:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:30 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.762871526Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.919969ms
Sep 30 14:17:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.861728551Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.866695622Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.969181ms
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.976238948Z level=info msg="Executing migration" id="create alert_rule table"
Sep 30 14:17:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 69 pg[9.15( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=4 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] r=0 lpr=69 pi=[55,69)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 69 pg[9.15( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=4 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] r=0 lpr=69 pi=[55,69)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 69 pg[9.d( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] r=0 lpr=69 pi=[55,69)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 69 pg[9.d( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] r=0 lpr=69 pi=[55,69)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 69 pg[9.5( v 58'1160 (0'0,58'1160] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] r=0 lpr=69 pi=[55,69)/1 crt=56'1158 lcod 57'1159 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 69 pg[9.5( v 58'1160 (0'0,58'1160] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] r=0 lpr=69 pi=[55,69)/1 crt=56'1158 lcod 57'1159 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 69 pg[9.1d( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] r=0 lpr=69 pi=[55,69)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 69 pg[9.1d( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] r=0 lpr=69 pi=[55,69)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:30.978979891Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=2.742482ms
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.018517252Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.019892689Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.383906ms
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.026471762Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.027473088Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.001526ms
Sep 30 14:17:31 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Sep 30 14:17:31 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.123060117Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.124799083Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.741126ms
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:31 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.240425949Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.240544163Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=122.273µs
Sep 30 14:17:31 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.364721864Z level=info msg="Executing migration" id="add column for to alert_rule"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.369629684Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.909419ms
Sep 30 14:17:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:31.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.462875111Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.468467028Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.592268ms
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.59987481Z level=info msg="Executing migration" id="add column labels to alert_rule"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.606052373Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.177023ms
Sep 30 14:17:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.767924608Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.769057498Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.13659ms
Sep 30 14:17:31 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:31 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.nozgvj on compute-2
Sep 30 14:17:31 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.nozgvj on compute-2
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.824717594Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.825865535Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.150991ms
Sep 30 14:17:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v80: 337 pgs: 2 peering, 4 unknown, 331 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:31 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.866656399Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.871693182Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.037773ms
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.880005011Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.885775283Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.767092ms
Sep 30 14:17:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.900419369Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.901551759Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.1459ms
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 10.1f scrub starts
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 10.1f scrub ok
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 5.c scrub starts
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 5.c scrub ok
Sep 30 14:17:31 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 8.15 scrub starts
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 8.15 scrub ok
Sep 30 14:17:31 compute-0 ceph-mon[74194]: pgmap v78: 337 pgs: 4 unknown, 333 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 12.14 scrub starts
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 12.14 scrub ok
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 5.a deep-scrub starts
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 5.a deep-scrub ok
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 12.11 scrub starts
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 12.11 scrub ok
Sep 30 14:17:31 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Sep 30 14:17:31 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Sep 30 14:17:31 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:31 compute-0 ceph-mon[74194]: osdmap e69: 3 total, 3 up, 3 in
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 5.17 deep-scrub starts
Sep 30 14:17:31 compute-0 ceph-mon[74194]: 5.17 deep-scrub ok
Sep 30 14:17:31 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.916733809Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.922185822Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.454473ms
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.936810238Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.942020175Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.206767ms
Sep 30 14:17:31 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.94598533Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.946252047Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=274.217µs
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.954551215Z level=info msg="Executing migration" id="create alert_rule_version table"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.95626243Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.716155ms
Sep 30 14:17:31 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 70 pg[9.15( v 48'1157 (0'0,48'1157] local-lis/les=69/70 n=4 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[55,69)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.959787533Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.961013786Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.229072ms
Sep 30 14:17:31 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 70 pg[9.5( v 58'1160 (0'0,58'1160] local-lis/les=69/70 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[55,69)/1 crt=58'1160 lcod 57'1159 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:31 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 70 pg[9.d( v 48'1157 (0'0,48'1157] local-lis/les=69/70 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[55,69)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:31 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 70 pg[9.1d( v 48'1157 (0'0,48'1157] local-lis/les=69/70 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[55,69)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.96991138Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.971630635Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.724655ms
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.978713082Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.978824445Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=113.653µs
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.984372111Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.989780183Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.411282ms
Sep 30 14:17:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:31.998301758Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.003036713Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.736555ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.014536786Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.019454755Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.918279ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.025099554Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.030016244Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.912379ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.032116739Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.037297055Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.177666ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.040338696Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.040413058Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=74.972µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.043221031Z level=info msg="Executing migration" id=create_alert_configuration_table
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.043999312Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=777.851µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.04657683Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.052719332Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.142701ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.057069026Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.057157289Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=89.423µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.060269571Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.064910773Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.643902ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.067032349Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.067917252Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=884.193µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.069897034Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.074954178Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=5.056544ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.077297439Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.078058939Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=760.68µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.080589896Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.081465679Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=915.734µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.083667787Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.088765742Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=5.095464ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.094120503Z level=info msg="Executing migration" id="create provenance_type table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.095411917Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.299165ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.101108057Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.102065252Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=956.535µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:32 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.104858916Z level=info msg="Executing migration" id="create alert_image table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.105734429Z level=info msg="Migration successfully executed" id="create alert_image table" duration=871.382µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.107843504Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.108907432Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.061858ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.110907175Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.110963316Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=56.191µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.113186265Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.11412509Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=938.835µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.117072277Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.117996002Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=923.545µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.12134108Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.121763921Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.127558334Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.128358845Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=803.371µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.133303775Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.134543768Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.240013ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.139794846Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.145445495Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.648659ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.14982796Z level=info msg="Executing migration" id="create library_element table v1"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.15097147Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.14481ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.156988958Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.158826897Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.843218ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.162353269Z level=info msg="Executing migration" id="create library_element_connection table v1"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.163608643Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.258193ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.166036997Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.16694262Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=905.724µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.173204995Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.174382346Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.182511ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.180637681Z level=info msg="Executing migration" id="increase max description length to 2048"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.180672472Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=36.101µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.186456955Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.18665089Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=201.046µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.191450516Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.191870837Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=430.281µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.193738026Z level=info msg="Executing migration" id="create data_keys table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.194798854Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.060788ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.198135352Z level=info msg="Executing migration" id="create secrets table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.198843631Z level=info msg="Migration successfully executed" id="create secrets table" duration=710.749µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.201982484Z level=info msg="Executing migration" id="rename data_keys name column to id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.231787009Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=29.800125ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.235449235Z level=info msg="Executing migration" id="add name column into data_keys"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.241020302Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.570337ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.246509837Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.246688452Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=179.815µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.250213184Z level=info msg="Executing migration" id="rename data_keys name column to label"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.27812547Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=27.909046ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.280226425Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.307669818Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.435303ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.312865225Z level=info msg="Executing migration" id="create kv_store table v1"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.313753309Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=890.404µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.319205992Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.320149577Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=939.525µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.322699854Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.322867479Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=167.935µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.327272775Z level=info msg="Executing migration" id="create permission table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.328030145Z level=info msg="Migration successfully executed" id="create permission table" duration=757.33µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.33012719Z level=info msg="Executing migration" id="add unique index permission.role_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.33088581Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=758.19µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.333635583Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.334637879Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.002837ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.336671493Z level=info msg="Executing migration" id="create role table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.337597477Z level=info msg="Migration successfully executed" id="create role table" duration=925.715µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.340927805Z level=info msg="Executing migration" id="add column display_name"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.346244935Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.31343ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.352976582Z level=info msg="Executing migration" id="add column group_name"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.358678622Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.70246ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.38136665Z level=info msg="Executing migration" id="add index role.org_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.382438268Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.074558ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.400403902Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.401692056Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.287864ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.408851164Z level=info msg="Executing migration" id="add index role_org_id_uid"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.409661836Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=810.772µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.414021621Z level=info msg="Executing migration" id="create team role table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.414703198Z level=info msg="Migration successfully executed" id="create team role table" duration=681.608µs
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.433763471Z level=info msg="Executing migration" id="add index team_role.org_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.434958822Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.198341ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.469221645Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.47054815Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.328645ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.591161708Z level=info msg="Executing migration" id="add index team_role.team_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.593218422Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.061235ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.660259338Z level=info msg="Executing migration" id="create user role table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.661586333Z level=info msg="Migration successfully executed" id="create user role table" duration=1.327985ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.708822028Z level=info msg="Executing migration" id="add index user_role.org_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.710832671Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=2.013933ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.745146625Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.746505211Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.361666ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.763268103Z level=info msg="Executing migration" id="add index user_role.user_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.764824094Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.559861ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.789397151Z level=info msg="Executing migration" id="create builtin role table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.790786897Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.392836ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.845563771Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.847096201Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.53496ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.882805072Z level=info msg="Executing migration" id="add index builtin_role.name"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.884152637Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.350925ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.894977923Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.901298559Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.316756ms
Sep 30 14:17:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.930806017Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.932604894Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.805587ms
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.965736267Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Sep 30 14:17:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:32.967070662Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.343745ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.006448301Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.008659209Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=2.217628ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.035959699Z level=info msg="Executing migration" id="add unique index role.uid"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.037483379Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.52657ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.04056393Z level=info msg="Executing migration" id="create seed assignment table"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.041749641Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.185961ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.106107237Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.107539954Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.435207ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.117689802Z level=info msg="Executing migration" id="add column hidden to role table"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.124156402Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.46719ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.128749983Z level=info msg="Executing migration" id="permission kind migration"
Sep 30 14:17:33 compute-0 ceph-mon[74194]: 10.e scrub starts
Sep 30 14:17:33 compute-0 ceph-mon[74194]: 10.e scrub ok
Sep 30 14:17:33 compute-0 ceph-mon[74194]: 7.11 scrub starts
Sep 30 14:17:33 compute-0 ceph-mon[74194]: 7.11 scrub ok
Sep 30 14:17:33 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:33 compute-0 ceph-mon[74194]: Deploying daemon haproxy.rgw.default.compute-2.nozgvj on compute-2
Sep 30 14:17:33 compute-0 ceph-mon[74194]: pgmap v80: 337 pgs: 2 peering, 4 unknown, 331 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Sep 30 14:17:33 compute-0 ceph-mon[74194]: osdmap e70: 3 total, 3 up, 3 in
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.135187023Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.420209ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.142332291Z level=info msg="Executing migration" id="permission attribute migration"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.148337789Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.006858ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.152012116Z level=info msg="Executing migration" id="permission identifier migration"
Sep 30 14:17:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.160004777Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.99162ms
Sep 30 14:17:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.165283346Z level=info msg="Executing migration" id="add permission identifier index"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.166506838Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.224992ms
Sep 30 14:17:33 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 71 pg[9.15( v 48'1157 (0'0,48'1157] local-lis/les=69/70 n=4 ec=55/37 lis/c=69/55 les/c/f=70/56/0 sis=71 pruub=14.783017159s) [2] async=[2] r=-1 lpr=71 pi=[55,71)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 234.793060303s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:33 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 71 pg[9.15( v 48'1157 (0'0,48'1157] local-lis/les=69/70 n=4 ec=55/37 lis/c=69/55 les/c/f=70/56/0 sis=71 pruub=14.782953262s) [2] r=-1 lpr=71 pi=[55,71)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.793060303s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:33 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 71 pg[9.5( v 70'1163 (0'0,70'1163] local-lis/les=69/70 n=6 ec=55/37 lis/c=69/55 les/c/f=70/56/0 sis=71 pruub=14.790976524s) [2] async=[2] r=-1 lpr=71 pi=[55,71)/1 crt=58'1160 lcod 70'1162 mlcod 70'1162 active pruub 234.801239014s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:33 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 71 pg[9.d( v 48'1157 (0'0,48'1157] local-lis/les=69/70 n=6 ec=55/37 lis/c=69/55 les/c/f=70/56/0 sis=71 pruub=14.790972710s) [2] async=[2] r=-1 lpr=71 pi=[55,71)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 234.801223755s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:33 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 71 pg[9.5( v 70'1163 (0'0,70'1163] local-lis/les=69/70 n=6 ec=55/37 lis/c=69/55 les/c/f=70/56/0 sis=71 pruub=14.790917397s) [2] r=-1 lpr=71 pi=[55,71)/1 crt=58'1160 lcod 70'1162 mlcod 0'0 unknown NOTIFY pruub 234.801239014s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:33 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 71 pg[9.d( v 48'1157 (0'0,48'1157] local-lis/les=69/70 n=6 ec=55/37 lis/c=69/55 les/c/f=70/56/0 sis=71 pruub=14.790883064s) [2] r=-1 lpr=71 pi=[55,71)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.801223755s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.174747915Z level=info msg="Executing migration" id="add permission action scope role_id index"
Sep 30 14:17:33 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 71 pg[9.1d( v 48'1157 (0'0,48'1157] local-lis/les=69/70 n=5 ec=55/37 lis/c=69/55 les/c/f=70/56/0 sis=71 pruub=14.791084290s) [2] async=[2] r=-1 lpr=71 pi=[55,71)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 234.801254272s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:33 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 71 pg[9.1d( v 48'1157 (0'0,48'1157] local-lis/les=69/70 n=5 ec=55/37 lis/c=69/55 les/c/f=70/56/0 sis=71 pruub=14.790219307s) [2] r=-1 lpr=71 pi=[55,71)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.801254272s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.177064126Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=2.320101ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.18935851Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.191048334Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.694484ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.196666602Z level=info msg="Executing migration" id="create query_history table v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.197644348Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=977.746µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.203580545Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.204982311Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.402077ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.210857656Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.210962319Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=111.473µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.215318744Z level=info msg="Executing migration" id="rbac disabled migrator"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.215420116Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=103.332µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.22238877Z level=info msg="Executing migration" id="teams permissions migration"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.222889163Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=502.063µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.226781856Z level=info msg="Executing migration" id="dashboard permissions"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.227411732Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=630.896µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:33 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.230653058Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.231404758Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=752.98µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.239291975Z level=info msg="Executing migration" id="drop managed folder create actions"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.239660595Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=370.13µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.246423553Z level=info msg="Executing migration" id="alerting notification permissions"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.247139862Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=718.329µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.250696286Z level=info msg="Executing migration" id="create query_history_star table v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.25160527Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=909.144µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.257101165Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.258485131Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.387087ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.263075872Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.269621044Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.539622ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.27326939Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.273394424Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=126.644µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.277328967Z level=info msg="Executing migration" id="create correlation table v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.278596981Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.267364ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.284726102Z level=info msg="Executing migration" id="add index correlations.uid"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.285662637Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=936.595µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.291229284Z level=info msg="Executing migration" id="add index correlations.source_uid"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.292158378Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=929.274µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.299892902Z level=info msg="Executing migration" id="add correlation config column"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.307549354Z level=info msg="Migration successfully executed" id="add correlation config column" duration=7.656821ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.313274764Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.31423756Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=962.756µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.318210434Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.31919849Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=988.806µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.323760781Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.342085023Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=18.320692ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.345983006Z level=info msg="Executing migration" id="create correlation v2"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.347313941Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.332115ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.350081684Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.351333057Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.248783ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.354307525Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.355599329Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.293344ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.362262735Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.363295032Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.032377ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.367999966Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.368322305Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=322.399µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.371448877Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.372631718Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.182221ms
Sep 30 14:17:33 compute-0 sshd-session[99194]: Accepted publickey for zuul from 192.168.122.30 port 37946 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.37573367Z level=info msg="Executing migration" id="add provisioning column"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.382762195Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.024955ms
Sep 30 14:17:33 compute-0 systemd-logind[808]: New session 37 of user zuul.
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.386692579Z level=info msg="Executing migration" id="create entity_events table"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.387750226Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.059188ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.391964127Z level=info msg="Executing migration" id="create dashboard public config v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.393262262Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.298715ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.39700859Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.397735349Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Sep 30 14:17:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:33.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.40345633Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.404121918Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Sep 30 14:17:33 compute-0 systemd[1]: Started Session 37 of User zuul.
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.409833668Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.413207847Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=3.372999ms
Sep 30 14:17:33 compute-0 sshd-session[99194]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.418321502Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.420455578Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=2.136006ms
Sep 30 14:17:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:33.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.52263965Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.524360636Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.723686ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.627215666Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.629094335Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.925351ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.635365131Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.637262091Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.89855ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.640566868Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Sep 30 14:17:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.642028586Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.462268ms
Sep 30 14:17:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.649643337Z level=info msg="Executing migration" id="Drop public config table"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.650774327Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.13295ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.656293662Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.65773081Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.437678ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.661301424Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.662323851Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.023437ms
Sep 30 14:17:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.66875133Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.670008673Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.257813ms
Sep 30 14:17:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.672611182Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.673554397Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=943.285µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.681876636Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Sep 30 14:17:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.707286256Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.407219ms
Sep 30 14:17:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:33 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:17:33 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:17:33 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:17:33 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.710463669Z level=info msg="Executing migration" id="add annotations_enabled column"
Sep 30 14:17:33 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.rnvjsg on compute-0
Sep 30 14:17:33 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.rnvjsg on compute-0
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.721372317Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=10.904008ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.725641529Z level=info msg="Executing migration" id="add time_selection_enabled column"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.7332591Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.616951ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.736153596Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.736419433Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=266.667µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.738301623Z level=info msg="Executing migration" id="add share column"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.745271926Z level=info msg="Migration successfully executed" id="add share column" duration=6.964463ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.747347031Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.747592138Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=246.707µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.749729264Z level=info msg="Executing migration" id="create file table"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.750628288Z level=info msg="Migration successfully executed" id="create file table" duration=898.794µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.75375713Z level=info msg="Executing migration" id="file table idx: path natural pk"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.755110196Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.353796ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.757315494Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.758350181Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.034427ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.760460017Z level=info msg="Executing migration" id="create file_meta table"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.761137124Z level=info msg="Migration successfully executed" id="create file_meta table" duration=676.657µs
Sep 30 14:17:33 compute-0 sudo[99250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.765110419Z level=info msg="Executing migration" id="file table idx: path key"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.766086405Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=976.826µs
Sep 30 14:17:33 compute-0 sudo[99250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.769000052Z level=info msg="Executing migration" id="set path collation in file table"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.769050533Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=51.511µs
Sep 30 14:17:33 compute-0 sudo[99250]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.772471403Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.772523834Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=53.161µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.774206419Z level=info msg="Executing migration" id="managed permissions migration"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.77462916Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=422.661µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.776786177Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.776965142Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=179.675µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.778695037Z level=info msg="Executing migration" id="RBAC action name migrator"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.780071193Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.374866ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.782031285Z level=info msg="Executing migration" id="Add UID column to playlist"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.789480541Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=7.448306ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.792804319Z level=info msg="Executing migration" id="Update uid column values in playlist"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.792966853Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=163.074µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.798096718Z level=info msg="Executing migration" id="Add index for uid in playlist"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.799345521Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.248423ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.802069493Z level=info msg="Executing migration" id="update group index for alert rules"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.802467513Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=399.44µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.80422949Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.804451296Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=222.286µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.806978282Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.807521157Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=542.875µs
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.809469478Z level=info msg="Executing migration" id="add action column to seed_assignment"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.816299968Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.8285ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.818855635Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Sep 30 14:17:33 compute-0 sudo[99275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:17:33 compute-0 sudo[99275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.825586693Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.726897ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.829259329Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.830484002Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.232143ms
Sep 30 14:17:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v83: 337 pgs: 2 peering, 4 unknown, 331 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.83271169Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:33 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.906350781Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=73.63349ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.9188664Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.92036842Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.5039ms
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.925753442Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Sep 30 14:17:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:33.927266912Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.515439ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.033388038Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.057070192Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=23.682064ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.097450806Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.106068503Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.614686ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.130570108Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.130985199Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=418.131µs
Sep 30 14:17:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.16553549Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.165880389Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=346.45µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.214280114Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.214754186Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=444.162µs
Sep 30 14:17:34 compute-0 ceph-mon[74194]: 10.9 deep-scrub starts
Sep 30 14:17:34 compute-0 ceph-mon[74194]: 10.9 deep-scrub ok
Sep 30 14:17:34 compute-0 ceph-mon[74194]: 10.1 scrub starts
Sep 30 14:17:34 compute-0 ceph-mon[74194]: 10.1 scrub ok
Sep 30 14:17:34 compute-0 ceph-mon[74194]: 12.f scrub starts
Sep 30 14:17:34 compute-0 ceph-mon[74194]: 12.f scrub ok
Sep 30 14:17:34 compute-0 ceph-mon[74194]: osdmap e71: 3 total, 3 up, 3 in
Sep 30 14:17:34 compute-0 ceph-mon[74194]: 8.3 scrub starts
Sep 30 14:17:34 compute-0 ceph-mon[74194]: 8.3 scrub ok
Sep 30 14:17:34 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:34 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:34 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:34 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:34 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:17:34 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:17:34 compute-0 ceph-mon[74194]: Deploying daemon keepalived.rgw.default.compute-0.rnvjsg on compute-0
Sep 30 14:17:34 compute-0 ceph-mon[74194]: pgmap v83: 337 pgs: 2 peering, 4 unknown, 331 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.227640976Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.227983475Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=346.469µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.266409117Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.266803858Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=413.631µs
Sep 30 14:17:34 compute-0 podman[99435]: 2025-09-30 14:17:34.207360992 +0000 UTC m=+0.021913189 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Sep 30 14:17:34 compute-0 podman[99435]: 2025-09-30 14:17:34.34127841 +0000 UTC m=+0.155830597 container create 694cc52944a1a1d5f4b9e15cf5f50e8b06c6daf3c23d152cb8dab56e3e40bf5e (image=quay.io/ceph/keepalived:2.2.4, name=epic_edison, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.component=keepalived-container, distribution-scope=public, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.341527787Z level=info msg="Executing migration" id="create folder table"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.343201981Z level=info msg="Migration successfully executed" id="create folder table" duration=1.675734ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.34620444Z level=info msg="Executing migration" id="Add index for parent_uid"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.347817982Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.617202ms
Sep 30 14:17:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.353082521Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.354467888Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.387377ms
Sep 30 14:17:34 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.357130368Z level=info msg="Executing migration" id="Update folder title length"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.357179259Z level=info msg="Migration successfully executed" id="Update folder title length" duration=51.201µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.359290825Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.360609699Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.319044ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.363568217Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.365017666Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.452398ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.367422139Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.368695772Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.274053ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.372114523Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.372589065Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=475.842µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.374645099Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.375253735Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=608.676µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.377278549Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.378791499Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.512119ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.381041438Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.3838Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=2.758002ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.387452487Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Sep 30 14:17:34 compute-0 systemd[1]: Started libpod-conmon-694cc52944a1a1d5f4b9e15cf5f50e8b06c6daf3c23d152cb8dab56e3e40bf5e.scope.
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.3887348Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.283863ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.390506107Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.391699889Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.195262ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.393322731Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.394520743Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.195802ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.396143196Z level=info msg="Executing migration" id="create anon_device table"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.396927406Z level=info msg="Migration successfully executed" id="create anon_device table" duration=797.471µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.398493268Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.399545515Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.052397ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.402850302Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.405312167Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.466245ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.407981468Z level=info msg="Executing migration" id="create signing_key table"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.409699263Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.717316ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.41339868Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.414996662Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.602372ms
Sep 30 14:17:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.419013698Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.420339873Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.329705ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.42248361Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.422857069Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=375.359µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.425196651Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.432386771Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.188419ms
Sep 30 14:17:34 compute-0 podman[99435]: 2025-09-30 14:17:34.432341069 +0000 UTC m=+0.246893276 container init 694cc52944a1a1d5f4b9e15cf5f50e8b06c6daf3c23d152cb8dab56e3e40bf5e (image=quay.io/ceph/keepalived:2.2.4, name=epic_edison, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, distribution-scope=public, io.openshift.tags=Ceph keepalived, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, description=keepalived for Ceph, architecture=x86_64)
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.435050761Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.435954925Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=905.874µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.437745792Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.43880097Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.054598ms
Sep 30 14:17:34 compute-0 podman[99435]: 2025-09-30 14:17:34.440615697 +0000 UTC m=+0.255167864 container start 694cc52944a1a1d5f4b9e15cf5f50e8b06c6daf3c23d152cb8dab56e3e40bf5e (image=quay.io/ceph/keepalived:2.2.4, name=epic_edison, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, version=2.2.4, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived)
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.440831433Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.441800849Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=968.936µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.443892524Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.444978132Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.086148ms
Sep 30 14:17:34 compute-0 podman[99435]: 2025-09-30 14:17:34.445084305 +0000 UTC m=+0.259636492 container attach 694cc52944a1a1d5f4b9e15cf5f50e8b06c6daf3c23d152cb8dab56e3e40bf5e (image=quay.io/ceph/keepalived:2.2.4, name=epic_edison, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-type=git, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 14:17:34 compute-0 epic_edison[99451]: 0 0
Sep 30 14:17:34 compute-0 systemd[1]: libpod-694cc52944a1a1d5f4b9e15cf5f50e8b06c6daf3c23d152cb8dab56e3e40bf5e.scope: Deactivated successfully.
Sep 30 14:17:34 compute-0 podman[99435]: 2025-09-30 14:17:34.44678029 +0000 UTC m=+0.261332467 container died 694cc52944a1a1d5f4b9e15cf5f50e8b06c6daf3c23d152cb8dab56e3e40bf5e (image=quay.io/ceph/keepalived:2.2.4, name=epic_edison, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=)
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.447287383Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.448481115Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.193582ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.450445636Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.451400152Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=954.685µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.45626862Z level=info msg="Executing migration" id="create sso_setting table"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.45779113Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.52546ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.461234811Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.462431122Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.197091ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.471925932Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.472413335Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=491.113µs
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.475825945Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.475985869Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=165.234µs
Sep 30 14:17:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-de4638ab5bea8f2192cb4b50316d2e5ad3c7bbf7376dd056ce956cf82ff83a3f-merged.mount: Deactivated successfully.
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.480993691Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.49007414Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.075629ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.492414642Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Sep 30 14:17:34 compute-0 python3.9[99429]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.502159479Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.740357ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.505519387Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.506161824Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=644.447µs
Sep 30 14:17:34 compute-0 podman[99435]: 2025-09-30 14:17:34.506423151 +0000 UTC m=+0.320975328 container remove 694cc52944a1a1d5f4b9e15cf5f50e8b06c6daf3c23d152cb8dab56e3e40bf5e (image=quay.io/ceph/keepalived:2.2.4, name=epic_edison, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=migrator t=2025-09-30T14:17:34.508949198Z level=info msg="migrations completed" performed=547 skipped=0 duration=10.126953385s
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=sqlstore t=2025-09-30T14:17:34.51056589Z level=info msg="Created default organization"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=secrets t=2025-09-30T14:17:34.51282417Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Sep 30 14:17:34 compute-0 systemd[1]: libpod-conmon-694cc52944a1a1d5f4b9e15cf5f50e8b06c6daf3c23d152cb8dab56e3e40bf5e.scope: Deactivated successfully.
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=plugin.store t=2025-09-30T14:17:34.5405249Z level=info msg="Loading plugins..."
Sep 30 14:17:34 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 27 completed events
Sep 30 14:17:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:17:34 compute-0 systemd[1]: Reloading.
Sep 30 14:17:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:34 compute-0 ceph-mgr[74485]: [progress WARNING root] Starting Global Recovery Event,6 pgs not in active + clean state
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=local.finder t=2025-09-30T14:17:34.631136117Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=plugin.store t=2025-09-30T14:17:34.631595109Z level=info msg="Plugins loaded" count=55 duration=90.642518ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=query_data t=2025-09-30T14:17:34.634249169Z level=info msg="Query Service initialization"
Sep 30 14:17:34 compute-0 systemd-rc-local-generator[99499]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=live.push_http t=2025-09-30T14:17:34.637870525Z level=info msg="Live Push Gateway initialization"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ngalert.migration t=2025-09-30T14:17:34.643520023Z level=info msg=Starting
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ngalert.migration t=2025-09-30T14:17:34.644059228Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Sep 30 14:17:34 compute-0 systemd-sysv-generator[99502]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ngalert.migration orgID=1 t=2025-09-30T14:17:34.645477225Z level=info msg="Migrating alerts for organisation"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ngalert.migration orgID=1 t=2025-09-30T14:17:34.646127692Z level=info msg="Alerts found to migrate" alerts=0
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ngalert.migration t=2025-09-30T14:17:34.647908739Z level=info msg="Completed alerting migration"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ngalert.state.manager t=2025-09-30T14:17:34.695844302Z level=info msg="Running in alternative execution of Error/NoData mode"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=infra.usagestats.collector t=2025-09-30T14:17:34.699005145Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=provisioning.datasources t=2025-09-30T14:17:34.700549476Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=provisioning.alerting t=2025-09-30T14:17:34.716377113Z level=info msg="starting to provision alerting"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=provisioning.alerting t=2025-09-30T14:17:34.716406634Z level=info msg="finished to provision alerting"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ngalert.state.manager t=2025-09-30T14:17:34.717735829Z level=info msg="Warming state cache for startup"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=provisioning.dashboard t=2025-09-30T14:17:34.718100649Z level=info msg="starting to provision dashboards"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=grafanaStorageLogger t=2025-09-30T14:17:34.719031893Z level=info msg="Storage starting"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=http.server t=2025-09-30T14:17:34.720903062Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=http.server t=2025-09-30T14:17:34.721377755Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ngalert.multiorg.alertmanager t=2025-09-30T14:17:34.718248142Z level=info msg="Starting MultiOrg Alertmanager"
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ngalert.state.manager t=2025-09-30T14:17:34.741997668Z level=info msg="State cache has been initialized" states=0 duration=24.262099ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ngalert.scheduler t=2025-09-30T14:17:34.742038229Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ticker t=2025-09-30T14:17:34.74208686Z level=info msg=starting first_tick=2025-09-30T14:17:40Z
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=grafana.update.checker t=2025-09-30T14:17:34.788812312Z level=info msg="Update check succeeded" duration=71.278998ms
Sep 30 14:17:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=plugins.update.checker t=2025-09-30T14:17:34.794292546Z level=info msg="Update check succeeded" duration=76.65914ms
Sep 30 14:17:34 compute-0 systemd[1]: Reloading.
Sep 30 14:17:34 compute-0 systemd-rc-local-generator[99563]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:17:34 compute-0 systemd-sysv-generator[99566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=grafana-apiserver t=2025-09-30T14:17:35.036918539Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=grafana-apiserver t=2025-09-30T14:17:35.03884588Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=provisioning.dashboard t=2025-09-30T14:17:35.069912688Z level=info msg="finished to provision dashboards"
Sep 30 14:17:35 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.rnvjsg for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.282839) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241855282928, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7498, "num_deletes": 257, "total_data_size": 13990592, "memory_usage": 14675832, "flush_reason": "Manual Compaction"}
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Sep 30 14:17:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:17:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:35.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:17:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:35.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241855512421, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12562967, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 142, "largest_seqno": 7635, "table_properties": {"data_size": 12535326, "index_size": 17798, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8773, "raw_key_size": 84539, "raw_average_key_size": 24, "raw_value_size": 12467774, "raw_average_value_size": 3561, "num_data_blocks": 781, "num_entries": 3501, "num_filter_entries": 3501, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241529, "oldest_key_time": 1759241529, "file_creation_time": 1759241855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 229637 microseconds, and 25499 cpu microseconds.
Sep 30 14:17:35 compute-0 podman[99663]: 2025-09-30 14:17:35.513365173 +0000 UTC m=+0.110866193 container create 7fe74253a48758446d4f3d369c1fd3ab2fb4a83e1cd7f0f49b2f78468226f0e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, version=2.2.4, build-date=2023-02-22T09:23:20, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, release=1793, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.512477) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12562967 bytes OK
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.512500) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.514245) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.514264) EVENT_LOG_v1 {"time_micros": 1759241855514258, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.514290) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13956593, prev total WAL file size 13958991, number of live WAL files 2.
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.517288) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323538' seq:0, type:0; will stop at (end)
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(57KB) 8(1944B)]
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241855517426, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12623405, "oldest_snapshot_seqno": -1}
Sep 30 14:17:35 compute-0 podman[99663]: 2025-09-30 14:17:35.422857258 +0000 UTC m=+0.020358298 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Sep 30 14:17:35 compute-0 ceph-mon[74194]: 12.1 scrub starts
Sep 30 14:17:35 compute-0 ceph-mon[74194]: 12.1 scrub ok
Sep 30 14:17:35 compute-0 ceph-mon[74194]: osdmap e72: 3 total, 3 up, 3 in
Sep 30 14:17:35 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:35 compute-0 ceph-mon[74194]: 11.13 scrub starts
Sep 30 14:17:35 compute-0 ceph-mon[74194]: 11.13 scrub ok
Sep 30 14:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3f2329df98f94e8f43a23f117f4b24dfe910a3ea690f76bcb387c7be47199c/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3317 keys, 12605449 bytes, temperature: kUnknown
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241855642905, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12605449, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12578106, "index_size": 17953, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 83437, "raw_average_key_size": 25, "raw_value_size": 12512119, "raw_average_value_size": 3772, "num_data_blocks": 789, "num_entries": 3317, "num_filter_entries": 3317, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759241855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.643149) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12605449 bytes
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.644882) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.5 rd, 100.4 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(12.0, 0.0 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3610, records dropped: 293 output_compression: NoCompression
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.644904) EVENT_LOG_v1 {"time_micros": 1759241855644894, "job": 4, "event": "compaction_finished", "compaction_time_micros": 125555, "compaction_time_cpu_micros": 28583, "output_level": 6, "num_output_files": 1, "total_output_size": 12605449, "num_input_records": 3610, "num_output_records": 3317, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:17:35 compute-0 podman[99663]: 2025-09-30 14:17:35.645917865 +0000 UTC m=+0.243418905 container init 7fe74253a48758446d4f3d369c1fd3ab2fb4a83e1cd7f0f49b2f78468226f0e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, build-date=2023-02-22T09:23:20, description=keepalived for Ceph)
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241855647154, "job": 4, "event": "table_file_deletion", "file_number": 19}
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241855647267, "job": 4, "event": "table_file_deletion", "file_number": 13}
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241855647332, "job": 4, "event": "table_file_deletion", "file_number": 8}
Sep 30 14:17:35 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:35.517006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:17:35 compute-0 podman[99663]: 2025-09-30 14:17:35.652056137 +0000 UTC m=+0.249557157 container start 7fe74253a48758446d4f3d369c1fd3ab2fb4a83e1cd7f0f49b2f78468226f0e6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, version=2.2.4, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph)
Sep 30 14:17:35 compute-0 bash[99663]: 7fe74253a48758446d4f3d369c1fd3ab2fb4a83e1cd7f0f49b2f78468226f0e6
Sep 30 14:17:35 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.rnvjsg for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:35 2025: Starting Keepalived v2.2.4 (08/21,2021)
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:35 2025: Running on Linux 5.14.0-617.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025 (built for Linux 5.14.0)
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:35 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:35 2025: Configuration file /etc/keepalived/keepalived.conf
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:35 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:35 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:35 2025: Starting VRRP child process, pid=4
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:35 2025: Startup complete
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:35 2025: (VI_0) Entering BACKUP STATE (init)
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:17:35 2025: (VI_0) Entering BACKUP STATE
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:35 2025: VRRP_Script(check_backend) succeeded
Sep 30 14:17:35 compute-0 sudo[99275]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:17:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:17:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 14:17:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:35 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:17:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:17:35 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:17:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:17:35 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.shztfi on compute-2
Sep 30 14:17:35 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.shztfi on compute-2
Sep 30 14:17:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 4 peering, 333 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 53 op/s; 31 B/s, 0 objects/s recovering
Sep 30 14:17:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:36 compute-0 sudo[99835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvqvuduccjzckcmnlxxmggkcoaxnpjny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241855.5611644-56-272326210564227/AnsiballZ_command.py'
Sep 30 14:17:36 compute-0 sudo[99835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:17:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:36 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:36 compute-0 python3.9[99837]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:17:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv[97599]: Tue Sep 30 14:17:36 2025: (VI_0) Entering MASTER STATE
Sep 30 14:17:36 compute-0 ceph-mon[74194]: 10.7 scrub starts
Sep 30 14:17:36 compute-0 ceph-mon[74194]: 10.7 scrub ok
Sep 30 14:17:36 compute-0 ceph-mon[74194]: 8.11 scrub starts
Sep 30 14:17:36 compute-0 ceph-mon[74194]: 8.11 scrub ok
Sep 30 14:17:36 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:36 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:36 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:36 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Sep 30 14:17:36 compute-0 ceph-mon[74194]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Sep 30 14:17:36 compute-0 ceph-mon[74194]: Deploying daemon keepalived.rgw.default.compute-2.shztfi on compute-2
Sep 30 14:17:36 compute-0 ceph-mon[74194]: pgmap v85: 337 pgs: 4 peering, 333 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 53 op/s; 31 B/s, 0 objects/s recovering
Sep 30 14:17:37 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Sep 30 14:17:37 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Sep 30 14:17:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:37 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:37.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:37.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:17:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v86: 337 pgs: 4 peering, 333 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 45 op/s; 26 B/s, 0 objects/s recovering
Sep 30 14:17:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:37 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf84001a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:17:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:38 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:38 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.a scrub starts
Sep 30 14:17:38 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.a scrub ok
Sep 30 14:17:38 compute-0 ceph-mon[74194]: 10.17 scrub starts
Sep 30 14:17:38 compute-0 ceph-mon[74194]: 10.17 scrub ok
Sep 30 14:17:38 compute-0 ceph-mon[74194]: 8.1f deep-scrub starts
Sep 30 14:17:38 compute-0 ceph-mon[74194]: 8.1f deep-scrub ok
Sep 30 14:17:38 compute-0 ceph-mon[74194]: 7.1b scrub starts
Sep 30 14:17:38 compute-0 ceph-mon[74194]: 7.1b scrub ok
Sep 30 14:17:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 14:17:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:38 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev 5c269cfc-3bce-47bb-9bef-5c64c8398507 (Updating ingress.rgw.default deployment (+4 -> 4))
Sep 30 14:17:38 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 5c269cfc-3bce-47bb-9bef-5c64c8398507 (Updating ingress.rgw.default deployment (+4 -> 4)) in 14 seconds
Sep 30 14:17:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Sep 30 14:17:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:38 compute-0 ceph-mgr[74485]: [progress INFO root] update: starting ev a24d8808-c0d5-405b-b748-5a0b721e0398 (Updating prometheus deployment (+1 -> 1))
Sep 30 14:17:39 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Sep 30 14:17:39 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Sep 30 14:17:39 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Sep 30 14:17:39 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Sep 30 14:17:39 compute-0 sudo[99852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:17:39 compute-0 sudo[99852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:17:39 compute-0 sudo[99852]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:39 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:39 compute-0 sudo[99877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:17:39 compute-0 sudo[99877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:17:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-rgw-default-compute-0-rnvjsg[99723]: Tue Sep 30 14:17:39 2025: (VI_0) Entering MASTER STATE
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 12.16 scrub starts
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 12.16 scrub ok
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 12.1d scrub starts
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 12.1d scrub ok
Sep 30 14:17:39 compute-0 ceph-mon[74194]: pgmap v86: 337 pgs: 4 peering, 333 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 45 op/s; 26 B/s, 0 objects/s recovering
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 12.15 scrub starts
Sep 30 14:17:39 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 12.15 scrub ok
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 12.a scrub starts
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 12.a scrub ok
Sep 30 14:17:39 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:39 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 10.12 scrub starts
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 10.12 scrub ok
Sep 30 14:17:39 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 5.5 scrub starts
Sep 30 14:17:39 compute-0 ceph-mon[74194]: 5.5 scrub ok
Sep 30 14:17:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:39.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:39.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:39 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 28 completed events
Sep 30 14:17:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:17:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v87: 337 pgs: 337 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 33 op/s; 95 B/s, 3 objects/s recovering
Sep 30 14:17:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Sep 30 14:17:39 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Sep 30 14:17:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Sep 30 14:17:39 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Sep 30 14:17:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:39 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:39 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:39 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event 8f2845bb-eece-4cb4-9148-049f62f14c02 (Global Recovery Event) in 5 seconds
Sep 30 14:17:40 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.c scrub starts
Sep 30 14:17:40 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.c scrub ok
Sep 30 14:17:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:40 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:17:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Sep 30 14:17:40 compute-0 ceph-mon[74194]: 5.11 scrub starts
Sep 30 14:17:40 compute-0 ceph-mon[74194]: 5.11 scrub ok
Sep 30 14:17:40 compute-0 ceph-mon[74194]: Deploying daemon prometheus.compute-0 on compute-0
Sep 30 14:17:40 compute-0 ceph-mon[74194]: 8.b deep-scrub starts
Sep 30 14:17:40 compute-0 ceph-mon[74194]: 8.b deep-scrub ok
Sep 30 14:17:40 compute-0 ceph-mon[74194]: pgmap v87: 337 pgs: 337 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 33 op/s; 95 B/s, 3 objects/s recovering
Sep 30 14:17:40 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Sep 30 14:17:40 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Sep 30 14:17:40 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:40 compute-0 ceph-mon[74194]: 12.c scrub starts
Sep 30 14:17:40 compute-0 ceph-mon[74194]: 12.c scrub ok
Sep 30 14:17:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Sep 30 14:17:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Sep 30 14:17:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Sep 30 14:17:40 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Sep 30 14:17:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 73 pg[9.e( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73 pruub=10.977923393s) [1] r=-1 lpr=73 pi=[55,73)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 238.536437988s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 73 pg[9.e( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73 pruub=10.977883339s) [1] r=-1 lpr=73 pi=[55,73)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.536437988s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 73 pg[9.6( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73 pruub=10.977243423s) [1] r=-1 lpr=73 pi=[55,73)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 238.536239624s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 73 pg[9.6( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73 pruub=10.977208138s) [1] r=-1 lpr=73 pi=[55,73)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.536239624s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 73 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73 pruub=10.970804214s) [1] r=-1 lpr=73 pi=[55,73)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 238.530609131s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 73 pg[9.16( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73 pruub=10.976459503s) [1] r=-1 lpr=73 pi=[55,73)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 238.536209106s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 73 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73 pruub=10.970687866s) [1] r=-1 lpr=73 pi=[55,73)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.530609131s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 73 pg[9.16( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=73 pruub=10.976211548s) [1] r=-1 lpr=73 pi=[55,73)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.536209106s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 73 pg[6.6( empty local-lis/les=0/0 n=0 ec=51/22 lis/c=61/61 les/c/f=62/62/0 sis=73) [0] r=0 lpr=73 pi=[61,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:40 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 73 pg[6.e( empty local-lis/les=0/0 n=0 ec=51/22 lis/c=61/61 les/c/f=62/62/0 sis=73) [0] r=0 lpr=73 pi=[61,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:41 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Sep 30 14:17:41 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Sep 30 14:17:41 compute-0 sshd-session[99962]: Received disconnect from 210.90.155.80 port 45610:11: Bye Bye [preauth]
Sep 30 14:17:41 compute-0 sshd-session[99962]: Disconnected from authenticating user root 210.90.155.80 port 45610 [preauth]
Sep 30 14:17:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:41 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:41.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:41.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Sep 30 14:17:41 compute-0 ceph-mon[74194]: 5.1 scrub starts
Sep 30 14:17:41 compute-0 ceph-mon[74194]: 5.1 scrub ok
Sep 30 14:17:41 compute-0 ceph-mon[74194]: 11.8 scrub starts
Sep 30 14:17:41 compute-0 ceph-mon[74194]: 11.8 scrub ok
Sep 30 14:17:41 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Sep 30 14:17:41 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Sep 30 14:17:41 compute-0 ceph-mon[74194]: osdmap e73: 3 total, 3 up, 3 in
Sep 30 14:17:41 compute-0 ceph-mon[74194]: 7.6 scrub starts
Sep 30 14:17:41 compute-0 ceph-mon[74194]: 7.6 scrub ok
Sep 30 14:17:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Sep 30 14:17:41 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Sep 30 14:17:41 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 74 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:41 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 74 pg[9.16( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:41 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 74 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:41 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 74 pg[9.16( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:41 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 74 pg[9.e( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:41 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 74 pg[9.e( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:41 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 74 pg[9.6( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:41 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 74 pg[9.6( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v90: 337 pgs: 337 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 3 objects/s recovering
Sep 30 14:17:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Sep 30 14:17:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Sep 30 14:17:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Sep 30 14:17:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Sep 30 14:17:41 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 74 pg[6.6( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=73/74 n=1 ec=51/22 lis/c=61/61 les/c/f=62/62/0 sis=73) [0] r=0 lpr=73 pi=[61,73)/1 crt=48'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:41 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:41 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 74 pg[6.e( v 48'39 lc 48'15 (0'0,48'39] local-lis/les=73/74 n=1 ec=51/22 lis/c=61/61 les/c/f=62/62/0 sis=73) [0] r=0 lpr=73 pi=[61,73)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:42 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.b scrub starts
Sep 30 14:17:42 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.b scrub ok
Sep 30 14:17:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:42 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Sep 30 14:17:43 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.e scrub starts
Sep 30 14:17:43 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.e scrub ok
Sep 30 14:17:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:43 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Sep 30 14:17:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Sep 30 14:17:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Sep 30 14:17:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:43.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:17:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:43.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:17:43 compute-0 ceph-mon[74194]: 5.10 deep-scrub starts
Sep 30 14:17:43 compute-0 ceph-mon[74194]: 5.10 deep-scrub ok
Sep 30 14:17:43 compute-0 ceph-mon[74194]: 10.f scrub starts
Sep 30 14:17:43 compute-0 ceph-mon[74194]: 10.f scrub ok
Sep 30 14:17:43 compute-0 ceph-mon[74194]: osdmap e74: 3 total, 3 up, 3 in
Sep 30 14:17:43 compute-0 ceph-mon[74194]: pgmap v90: 337 pgs: 337 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 3 objects/s recovering
Sep 30 14:17:43 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Sep 30 14:17:43 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Sep 30 14:17:43 compute-0 ceph-mon[74194]: 12.b scrub starts
Sep 30 14:17:43 compute-0 ceph-mon[74194]: 12.b scrub ok
Sep 30 14:17:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v92: 337 pgs: 337 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 98 B/s, 4 objects/s recovering
Sep 30 14:17:43 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Sep 30 14:17:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Sep 30 14:17:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Sep 30 14:17:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Sep 30 14:17:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Sep 30 14:17:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:43 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:43 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 75 pg[9.e( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:43 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 75 pg[9.6( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:43 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 75 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:43 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 75 pg[9.16( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:44 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Sep 30 14:17:44 compute-0 ceph-mon[74194]: 5.16 deep-scrub starts
Sep 30 14:17:44 compute-0 ceph-mon[74194]: 5.16 deep-scrub ok
Sep 30 14:17:44 compute-0 ceph-mon[74194]: 10.4 deep-scrub starts
Sep 30 14:17:44 compute-0 ceph-mon[74194]: 10.4 deep-scrub ok
Sep 30 14:17:44 compute-0 ceph-mon[74194]: 12.e scrub starts
Sep 30 14:17:44 compute-0 ceph-mon[74194]: 5.2 scrub starts
Sep 30 14:17:44 compute-0 ceph-mon[74194]: 12.e scrub ok
Sep 30 14:17:44 compute-0 ceph-mon[74194]: 5.2 scrub ok
Sep 30 14:17:44 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Sep 30 14:17:44 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Sep 30 14:17:44 compute-0 ceph-mon[74194]: 11.19 deep-scrub starts
Sep 30 14:17:44 compute-0 ceph-mon[74194]: 11.19 deep-scrub ok
Sep 30 14:17:44 compute-0 ceph-mon[74194]: pgmap v92: 337 pgs: 337 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 98 B/s, 4 objects/s recovering
Sep 30 14:17:44 compute-0 ceph-mon[74194]: osdmap e75: 3 total, 3 up, 3 in
Sep 30 14:17:44 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Sep 30 14:17:44 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Sep 30 14:17:44 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Sep 30 14:17:44 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Sep 30 14:17:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Sep 30 14:17:44 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[9.e( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76 pruub=15.520419121s) [1] async=[1] r=-1 lpr=76 pi=[55,76)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 246.795791626s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[9.8( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=15.260885239s) [2] r=-1 lpr=76 pi=[55,76)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 246.536575317s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[9.6( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[9.8( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=15.260816574s) [2] r=-1 lpr=76 pi=[55,76)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.536575317s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[9.18( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=15.260411263s) [2] r=-1 lpr=76 pi=[55,76)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 246.536361694s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[9.18( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=15.260351181s) [2] r=-1 lpr=76 pi=[55,76)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.536361694s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[9.e( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=76 pruub=15.520182610s) [1] r=-1 lpr=76 pi=[55,76)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.795791626s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[6.8( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=76 pruub=10.931583405s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 242.208038330s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[6.8( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/22 lis/c=51/51 les/c/f=52/52/0 sis=76 pruub=10.931559563s) [1] r=-1 lpr=76 pi=[51,76)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.208038330s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[9.16( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=5 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+recovering+remapped rops=1 mbc={255={(0+1)=3}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Sep 30 14:17:44 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 76 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=5 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[55,74)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Sep 30 14:17:44 compute-0 ceph-mgr[74485]: [progress INFO root] Writing back 29 completed events
Sep 30 14:17:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Sep 30 14:17:45 compute-0 sudo[99835]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:45 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Sep 30 14:17:45 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Sep 30 14:17:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:45 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:17:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Sep 30 14:17:45 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:45 compute-0 ceph-mgr[74485]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Sep 30 14:17:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:45.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:17:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:45.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:17:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v94: 337 pgs: 2 peering, 3 activating+remapped, 332 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 15/221 objects misplaced (6.787%); 21 B/s, 1 objects/s recovering
Sep 30 14:17:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:45 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Sep 30 14:17:45 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Sep 30 14:17:45 compute-0 podman[99944]: 2025-09-30 14:17:45.8901206 +0000 UTC m=+6.278868628 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Sep 30 14:17:46 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Sep 30 14:17:46 compute-0 sshd-session[99197]: Connection closed by 192.168.122.30 port 37946
Sep 30 14:17:46 compute-0 sshd-session[99194]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:17:46 compute-0 systemd-logind[808]: Session 37 logged out. Waiting for processes to exit.
Sep 30 14:17:46 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Sep 30 14:17:46 compute-0 systemd[1]: session-37.scope: Consumed 8.657s CPU time.
Sep 30 14:17:46 compute-0 systemd-logind[808]: Removed session 37.
Sep 30 14:17:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:46 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:46 compute-0 podman[99944]: 2025-09-30 14:17:46.377032759 +0000 UTC m=+6.765780767 volume create ce1e8c091c7d5d5a2d66b955418d09e9b47e189cc53edb08a21feb949092d709
Sep 30 14:17:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 77 pg[9.18( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=77) [2]/[0] r=0 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 77 pg[9.18( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=77) [2]/[0] r=0 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 77 pg[9.6( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=77 pruub=13.217950821s) [1] async=[1] r=-1 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 246.795852661s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 77 pg[9.6( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=6 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=77 pruub=13.217908859s) [1] r=-1 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.795852661s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 77 pg[9.8( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=77) [2]/[0] r=0 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 77 pg[9.8( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=77) [2]/[0] r=0 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:17:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 77 pg[9.16( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=5 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=77 pruub=13.217513084s) [1] async=[1] r=-1 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 246.795867920s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 77 pg[9.16( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=5 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=77 pruub=13.217414856s) [1] r=-1 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.795867920s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 77 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=5 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=77 pruub=13.217301369s) [1] async=[1] r=-1 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 246.795959473s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:46 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 77 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=74/75 n=5 ec=55/37 lis/c=74/55 les/c/f=75/56/0 sis=77 pruub=13.217231750s) [1] r=-1 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.795959473s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:46 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Sep 30 14:17:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Sep 30 14:17:47 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Sep 30 14:17:47 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Sep 30 14:17:47 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Sep 30 14:17:47 compute-0 ceph-mon[74194]: osdmap e76: 3 total, 3 up, 3 in
Sep 30 14:17:47 compute-0 ceph-mon[74194]: 12.4 scrub starts
Sep 30 14:17:47 compute-0 ceph-mon[74194]: 12.4 scrub ok
Sep 30 14:17:47 compute-0 ceph-mon[74194]: 9.14 deep-scrub starts
Sep 30 14:17:47 compute-0 ceph-mon[74194]: 9.14 deep-scrub ok
Sep 30 14:17:47 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:47 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Sep 30 14:17:47 compute-0 podman[99944]: 2025-09-30 14:17:47.203519865 +0000 UTC m=+7.592267893 container create df57f72bbd250cd8aeea3fd939ed0079864597b6a5454a891e1276bd1b1fe87f (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:47 compute-0 systemd[75514]: Created slice User Background Tasks Slice.
Sep 30 14:17:47 compute-0 systemd[75514]: Starting Cleanup of User's Temporary Files and Directories...
Sep 30 14:17:47 compute-0 systemd[75514]: Finished Cleanup of User's Temporary Files and Directories.
Sep 30 14:17:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:47 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:17:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:47.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.446094) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241867446128, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 546, "num_deletes": 251, "total_data_size": 545770, "memory_usage": 557464, "flush_reason": "Manual Compaction"}
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Sep 30 14:17:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Sep 30 14:17:47 compute-0 systemd[1]: Started libpod-conmon-df57f72bbd250cd8aeea3fd939ed0079864597b6a5454a891e1276bd1b1fe87f.scope.
Sep 30 14:17:47 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Sep 30 14:17:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:17:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b407790439720a7900749201c02d2962bb527721d5b7411b574db76d0ac8a2b8/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241867486060, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 533540, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7636, "largest_seqno": 8181, "table_properties": {"data_size": 530255, "index_size": 1129, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8555, "raw_average_key_size": 20, "raw_value_size": 523246, "raw_average_value_size": 1231, "num_data_blocks": 49, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241855, "oldest_key_time": 1759241855, "file_creation_time": 1759241867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 40012 microseconds, and 2804 cpu microseconds.
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:17:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:47.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.486102) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 533540 bytes OK
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.486120) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.518800) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.518856) EVENT_LOG_v1 {"time_micros": 1759241867518845, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.518885) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 542381, prev total WAL file size 542422, number of live WAL files 2.
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.519502) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(521KB)], [20(12MB)]
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241867519582, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 13138989, "oldest_snapshot_seqno": -1}
Sep 30 14:17:47 compute-0 podman[99944]: 2025-09-30 14:17:47.798231985 +0000 UTC m=+8.186980023 container init df57f72bbd250cd8aeea3fd939ed0079864597b6a5454a891e1276bd1b1fe87f (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:47 compute-0 podman[99944]: 2025-09-30 14:17:47.80564213 +0000 UTC m=+8.194390128 container start df57f72bbd250cd8aeea3fd939ed0079864597b6a5454a891e1276bd1b1fe87f (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3221 keys, 11982271 bytes, temperature: kUnknown
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241867807467, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 11982271, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11956142, "index_size": 16995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 83003, "raw_average_key_size": 25, "raw_value_size": 11892216, "raw_average_value_size": 3692, "num_data_blocks": 740, "num_entries": 3221, "num_filter_entries": 3221, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759241867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:17:47 compute-0 reverent_greider[100257]: 65534 65534
Sep 30 14:17:47 compute-0 systemd[1]: libpod-df57f72bbd250cd8aeea3fd939ed0079864597b6a5454a891e1276bd1b1fe87f.scope: Deactivated successfully.
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:17:47
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [balancer INFO root] Some PGs (0.014837) are inactive; try again later
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v97: 337 pgs: 2 peering, 3 activating+remapped, 332 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 15/221 objects misplaced (6.787%); 25 B/s, 1 objects/s recovering
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.807859) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11982271 bytes
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.842775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.6 rd, 41.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 12.0 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(47.1) write-amplify(22.5) OK, records in: 3742, records dropped: 521 output_compression: NoCompression
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.842817) EVENT_LOG_v1 {"time_micros": 1759241867842801, "job": 6, "event": "compaction_finished", "compaction_time_micros": 288104, "compaction_time_cpu_micros": 26367, "output_level": 6, "num_output_files": 1, "total_output_size": 11982271, "num_input_records": 3742, "num_output_records": 3221, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241867843117, "job": 6, "event": "table_file_deletion", "file_number": 22}
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241867845519, "job": 6, "event": "table_file_deletion", "file_number": 20}
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.519418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.845596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.845606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.845608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.845610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:17:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:17:47.845613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:17:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:47 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:17:47 compute-0 podman[99944]: 2025-09-30 14:17:47.923452134 +0000 UTC m=+8.312200152 container attach df57f72bbd250cd8aeea3fd939ed0079864597b6a5454a891e1276bd1b1fe87f (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:47 compute-0 podman[99944]: 2025-09-30 14:17:47.925785946 +0000 UTC m=+8.314533964 container died df57f72bbd250cd8aeea3fd939ed0079864597b6a5454a891e1276bd1b1fe87f (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:17:47 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:17:48 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 78 pg[9.8( v 48'1157 (0'0,48'1157] local-lis/les=77/78 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:48 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 78 pg[9.18( v 48'1157 (0'0,48'1157] local-lis/les=77/78 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[55,77)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:17:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:48 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c000ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:48 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Sep 30 14:17:48 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Sep 30 14:17:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b407790439720a7900749201c02d2962bb527721d5b7411b574db76d0ac8a2b8-merged.mount: Deactivated successfully.
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 12.1e scrub starts
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 12.1e scrub ok
Sep 30 14:17:48 compute-0 ceph-mon[74194]: pgmap v94: 337 pgs: 2 peering, 3 activating+remapped, 332 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 15/221 objects misplaced (6.787%); 21 B/s, 1 objects/s recovering
Sep 30 14:17:48 compute-0 ceph-mon[74194]: osdmap e77: 3 total, 3 up, 3 in
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 9.2 scrub starts
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 8.d scrub starts
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 8.d scrub ok
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 9.2 scrub ok
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 9.e scrub starts
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 9.9 scrub starts
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 9.9 scrub ok
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 9.e scrub ok
Sep 30 14:17:48 compute-0 ceph-mon[74194]: osdmap e78: 3 total, 3 up, 3 in
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 12.9 scrub starts
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 12.9 scrub ok
Sep 30 14:17:48 compute-0 ceph-mon[74194]: pgmap v97: 337 pgs: 2 peering, 3 activating+remapped, 332 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 15/221 objects misplaced (6.787%); 25 B/s, 1 objects/s recovering
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 9.c deep-scrub starts
Sep 30 14:17:48 compute-0 ceph-mon[74194]: 9.c deep-scrub ok
Sep 30 14:17:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Sep 30 14:17:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Sep 30 14:17:48 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Sep 30 14:17:48 compute-0 podman[99944]: 2025-09-30 14:17:48.83151471 +0000 UTC m=+9.220262708 container remove df57f72bbd250cd8aeea3fd939ed0079864597b6a5454a891e1276bd1b1fe87f (image=quay.io/prometheus/prometheus:v2.51.0, name=reverent_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:48 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 79 pg[9.8( v 48'1157 (0'0,48'1157] local-lis/les=77/78 n=6 ec=55/37 lis/c=77/55 les/c/f=78/56/0 sis=79 pruub=15.121128082s) [2] async=[2] r=-1 lpr=79 pi=[55,79)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 250.845779419s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:48 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 79 pg[9.8( v 48'1157 (0'0,48'1157] local-lis/les=77/78 n=6 ec=55/37 lis/c=77/55 les/c/f=78/56/0 sis=79 pruub=15.121039391s) [2] r=-1 lpr=79 pi=[55,79)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.845779419s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:48 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 79 pg[9.18( v 48'1157 (0'0,48'1157] local-lis/les=77/78 n=5 ec=55/37 lis/c=77/55 les/c/f=78/56/0 sis=79 pruub=15.120688438s) [2] async=[2] r=-1 lpr=79 pi=[55,79)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 250.845825195s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:17:48 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 79 pg[9.18( v 48'1157 (0'0,48'1157] local-lis/les=77/78 n=5 ec=55/37 lis/c=77/55 les/c/f=78/56/0 sis=79 pruub=15.120654106s) [2] r=-1 lpr=79 pi=[55,79)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 250.845825195s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:17:48 compute-0 podman[99944]: 2025-09-30 14:17:48.938271693 +0000 UTC m=+9.327019741 volume remove ce1e8c091c7d5d5a2d66b955418d09e9b47e189cc53edb08a21feb949092d709
Sep 30 14:17:48 compute-0 systemd[1]: libpod-conmon-df57f72bbd250cd8aeea3fd939ed0079864597b6a5454a891e1276bd1b1fe87f.scope: Deactivated successfully.
Sep 30 14:17:49 compute-0 podman[100275]: 2025-09-30 14:17:49.100407285 +0000 UTC m=+0.114995601 volume create bbbf856aa254d934d98d9a45777b0784acc0cb73c95e1c7e041a5b3c940adb81
Sep 30 14:17:49 compute-0 podman[100275]: 2025-09-30 14:17:49.027228657 +0000 UTC m=+0.041816993 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Sep 30 14:17:49 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Sep 30 14:17:49 compute-0 podman[100275]: 2025-09-30 14:17:49.148339418 +0000 UTC m=+0.162927734 container create 491559ce5e7febef46a3ed7d8763ca5421a1fee13e0bc245f18764c7fdac4043 (image=quay.io/prometheus/prometheus:v2.51.0, name=intelligent_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:49 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Sep 30 14:17:49 compute-0 systemd[1]: Started libpod-conmon-491559ce5e7febef46a3ed7d8763ca5421a1fee13e0bc245f18764c7fdac4043.scope.
Sep 30 14:17:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:49 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a741a645bce2eebe57a7e7b7aff817c2de375824bb6c07e35990028813f460c/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:49 compute-0 podman[100275]: 2025-09-30 14:17:49.304979925 +0000 UTC m=+0.319568261 container init 491559ce5e7febef46a3ed7d8763ca5421a1fee13e0bc245f18764c7fdac4043 (image=quay.io/prometheus/prometheus:v2.51.0, name=intelligent_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:49 compute-0 podman[100275]: 2025-09-30 14:17:49.311642281 +0000 UTC m=+0.326230597 container start 491559ce5e7febef46a3ed7d8763ca5421a1fee13e0bc245f18764c7fdac4043 (image=quay.io/prometheus/prometheus:v2.51.0, name=intelligent_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:49 compute-0 intelligent_swirles[100293]: 65534 65534
Sep 30 14:17:49 compute-0 systemd[1]: libpod-491559ce5e7febef46a3ed7d8763ca5421a1fee13e0bc245f18764c7fdac4043.scope: Deactivated successfully.
Sep 30 14:17:49 compute-0 podman[100275]: 2025-09-30 14:17:49.320536085 +0000 UTC m=+0.335124421 container attach 491559ce5e7febef46a3ed7d8763ca5421a1fee13e0bc245f18764c7fdac4043 (image=quay.io/prometheus/prometheus:v2.51.0, name=intelligent_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:49 compute-0 podman[100275]: 2025-09-30 14:17:49.320870234 +0000 UTC m=+0.335458580 container died 491559ce5e7febef46a3ed7d8763ca5421a1fee13e0bc245f18764c7fdac4043 (image=quay.io/prometheus/prometheus:v2.51.0, name=intelligent_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a741a645bce2eebe57a7e7b7aff817c2de375824bb6c07e35990028813f460c-merged.mount: Deactivated successfully.
Sep 30 14:17:49 compute-0 podman[100275]: 2025-09-30 14:17:49.369891615 +0000 UTC m=+0.384479921 container remove 491559ce5e7febef46a3ed7d8763ca5421a1fee13e0bc245f18764c7fdac4043 (image=quay.io/prometheus/prometheus:v2.51.0, name=intelligent_swirles, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:49 compute-0 podman[100275]: 2025-09-30 14:17:49.374182488 +0000 UTC m=+0.388770804 volume remove bbbf856aa254d934d98d9a45777b0784acc0cb73c95e1c7e041a5b3c940adb81
Sep 30 14:17:49 compute-0 systemd[1]: libpod-conmon-491559ce5e7febef46a3ed7d8763ca5421a1fee13e0bc245f18764c7fdac4043.scope: Deactivated successfully.
Sep 30 14:17:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:49.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:49 compute-0 systemd[1]: Reloading.
Sep 30 14:17:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:49 compute-0 systemd-rc-local-generator[100339]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:17:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:49.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:49 compute-0 systemd-sysv-generator[100342]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:17:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Sep 30 14:17:49 compute-0 ceph-mon[74194]: 9.16 deep-scrub starts
Sep 30 14:17:49 compute-0 ceph-mon[74194]: 9.16 deep-scrub ok
Sep 30 14:17:49 compute-0 ceph-mon[74194]: 12.2 scrub starts
Sep 30 14:17:49 compute-0 ceph-mon[74194]: 12.2 scrub ok
Sep 30 14:17:49 compute-0 ceph-mon[74194]: osdmap e79: 3 total, 3 up, 3 in
Sep 30 14:17:49 compute-0 ceph-mon[74194]: 7.2 scrub starts
Sep 30 14:17:49 compute-0 ceph-mon[74194]: 7.2 scrub ok
Sep 30 14:17:49 compute-0 systemd[1]: Reloading.
Sep 30 14:17:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Sep 30 14:17:49 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Sep 30 14:17:49 compute-0 systemd-rc-local-generator[100378]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:17:49 compute-0 systemd-sysv-generator[100381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:17:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v100: 337 pgs: 2 active+remapped, 2 peering, 3 activating+remapped, 330 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 15/216 objects misplaced (6.944%); 54 B/s, 2 objects/s recovering
Sep 30 14:17:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:49 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:49 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:50 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:50 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Sep 30 14:17:50 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Sep 30 14:17:50 compute-0 podman[100435]: 2025-09-30 14:17:50.202970785 +0000 UTC m=+0.043088707 container create e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5d2213d2b867d5305ab0f313ba5324e393f3f0580df8c5baaa40d245f44111/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5d2213d2b867d5305ab0f313ba5324e393f3f0580df8c5baaa40d245f44111/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Sep 30 14:17:50 compute-0 podman[100435]: 2025-09-30 14:17:50.255012216 +0000 UTC m=+0.095130148 container init e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:50 compute-0 podman[100435]: 2025-09-30 14:17:50.261278881 +0000 UTC m=+0.101396793 container start e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:17:50 compute-0 bash[100435]: e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57
Sep 30 14:17:50 compute-0 podman[100435]: 2025-09-30 14:17:50.182372542 +0000 UTC m=+0.022490484 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Sep 30 14:17:50 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:17:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.295Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.296Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.296Z caller=main.go:623 level=info host_details="(Linux 5.14.0-617.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025 x86_64 compute-0 (none))"
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.296Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.296Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.298Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.298Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.300Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.300Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.304Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.304Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.24µs
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.304Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.305Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.305Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=40.701µs wal_replay_duration=270.167µs wbl_replay_duration=150ns total_replay_duration=545.244µs
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.307Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.307Z caller=main.go:1153 level=info msg="TSDB started"
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.307Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Sep 30 14:17:50 compute-0 sudo[99877]: pam_unix(sudo:session): session closed for user root
Sep 30 14:17:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.347Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=40.859716ms db_storage=1.19µs remote_storage=1.31µs web_handler=800ns query_engine=750ns scrape=13.924667ms scrape_sd=233.606µs notify=15.21µs notify_sd=13.221µs rules=26.19462ms tracing=10.19µs
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.348Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Sep 30 14:17:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:17:50.348Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Sep 30 14:17:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:17:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Sep 30 14:17:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:50 compute-0 ceph-mgr[74485]: [progress INFO root] complete: finished ev a24d8808-c0d5-405b-b748-5a0b721e0398 (Updating prometheus deployment (+1 -> 1))
Sep 30 14:17:50 compute-0 ceph-mgr[74485]: [progress INFO root] Completed event a24d8808-c0d5-405b-b748-5a0b721e0398 (Updating prometheus deployment (+1 -> 1)) in 12 seconds
Sep 30 14:17:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Sep 30 14:17:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Sep 30 14:17:50 compute-0 ceph-mon[74194]: 9.6 scrub starts
Sep 30 14:17:50 compute-0 ceph-mon[74194]: 9.6 scrub ok
Sep 30 14:17:50 compute-0 ceph-mon[74194]: 11.3 scrub starts
Sep 30 14:17:50 compute-0 ceph-mon[74194]: 11.3 scrub ok
Sep 30 14:17:50 compute-0 ceph-mon[74194]: osdmap e80: 3 total, 3 up, 3 in
Sep 30 14:17:50 compute-0 ceph-mon[74194]: pgmap v100: 337 pgs: 2 active+remapped, 2 peering, 3 activating+remapped, 330 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 15/216 objects misplaced (6.944%); 54 B/s, 2 objects/s recovering
Sep 30 14:17:50 compute-0 ceph-mon[74194]: 10.8 scrub starts
Sep 30 14:17:50 compute-0 ceph-mon[74194]: 10.8 scrub ok
Sep 30 14:17:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:50 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Sep 30 14:17:51 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Sep 30 14:17:51 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Sep 30 14:17:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:51 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:51.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:51.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Sep 30 14:17:51 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.buxlkm(active, since 2m), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:17:51 compute-0 sshd-session[91787]: Connection closed by 192.168.122.100 port 39336
Sep 30 14:17:51 compute-0 sshd-session[91784]: pam_unix(sshd:session): session closed for user ceph-admin
Sep 30 14:17:51 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Sep 30 14:17:51 compute-0 systemd[1]: session-36.scope: Consumed 46.475s CPU time.
Sep 30 14:17:51 compute-0 systemd-logind[808]: Session 36 logged out. Waiting for processes to exit.
Sep 30 14:17:51 compute-0 systemd-logind[808]: Removed session 36.
Sep 30 14:17:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ignoring --setuser ceph since I am not root
Sep 30 14:17:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ignoring --setgroup ceph since I am not root
Sep 30 14:17:51 compute-0 ceph-mgr[74485]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Sep 30 14:17:51 compute-0 ceph-mgr[74485]: pidfile_write: ignore empty --pid-file
Sep 30 14:17:51 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'alerts'
Sep 30 14:17:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:51.849+0000 7ffa70715140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:17:51 compute-0 ceph-mgr[74485]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Sep 30 14:17:51 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'balancer'
Sep 30 14:17:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:51 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:51.935+0000 7ffa70715140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:17:51 compute-0 ceph-mgr[74485]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Sep 30 14:17:51 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'cephadm'
Sep 30 14:17:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:52 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:52 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.f scrub starts
Sep 30 14:17:52 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.f scrub ok
Sep 30 14:17:52 compute-0 ceph-mon[74194]: 5.f scrub starts
Sep 30 14:17:52 compute-0 ceph-mon[74194]: 5.f scrub ok
Sep 30 14:17:52 compute-0 ceph-mon[74194]: 12.3 scrub starts
Sep 30 14:17:52 compute-0 ceph-mon[74194]: 12.3 scrub ok
Sep 30 14:17:52 compute-0 ceph-mon[74194]: 12.8 scrub starts
Sep 30 14:17:52 compute-0 ceph-mon[74194]: 12.8 scrub ok
Sep 30 14:17:52 compute-0 ceph-mon[74194]: from='mgr.14412 192.168.122.100:0/2104137300' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Sep 30 14:17:52 compute-0 ceph-mon[74194]: mgrmap e26: compute-0.buxlkm(active, since 2m), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:17:52 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'crash'
Sep 30 14:17:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:52.761+0000 7ffa70715140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:17:52 compute-0 ceph-mgr[74485]: mgr[py] Module crash has missing NOTIFY_TYPES member
Sep 30 14:17:52 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'dashboard'
Sep 30 14:17:53 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Sep 30 14:17:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:53 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:53 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'devicehealth'
Sep 30 14:17:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:53.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:53.453+0000 7ffa70715140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:17:53 compute-0 ceph-mgr[74485]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Sep 30 14:17:53 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'diskprediction_local'
Sep 30 14:17:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:53.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Sep 30 14:17:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Sep 30 14:17:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]:   from numpy import show_config as show_numpy_config
Sep 30 14:17:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:53.623+0000 7ffa70715140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:17:53 compute-0 ceph-mgr[74485]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Sep 30 14:17:53 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'influx'
Sep 30 14:17:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:53.699+0000 7ffa70715140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:17:53 compute-0 ceph-mgr[74485]: mgr[py] Module influx has missing NOTIFY_TYPES member
Sep 30 14:17:53 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'insights'
Sep 30 14:17:53 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'iostat'
Sep 30 14:17:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:53.844+0000 7ffa70715140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:17:53 compute-0 ceph-mgr[74485]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Sep 30 14:17:53 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'k8sevents'
Sep 30 14:17:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:53 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0019c0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:54 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:54 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'localpool'
Sep 30 14:17:54 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mds_autoscaler'
Sep 30 14:17:54 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Sep 30 14:17:54 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Sep 30 14:17:54 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'mirroring'
Sep 30 14:17:54 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'nfs'
Sep 30 14:17:54 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Sep 30 14:17:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:54.997+0000 7ffa70715140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:17:54 compute-0 ceph-mgr[74485]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Sep 30 14:17:54 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'orchestrator'
Sep 30 14:17:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:55.215+0000 7ffa70715140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_perf_query'
Sep 30 14:17:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:55 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:17:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:55.294+0000 7ffa70715140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'osd_support'
Sep 30 14:17:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:55.370+0000 7ffa70715140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'pg_autoscaler'
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 5.7 scrub starts
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 5.7 scrub ok
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 8.a scrub starts
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 8.a scrub ok
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 8.17 scrub starts
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 8.17 scrub ok
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 7.f scrub starts
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 7.f scrub ok
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 8.14 scrub starts
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 8.14 scrub ok
Sep 30 14:17:55 compute-0 ceph-mon[74194]: 7.8 scrub starts
Sep 30 14:17:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 14:17:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:55.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 14:17:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:55.452+0000 7ffa70715140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'progress'
Sep 30 14:17:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:55.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:55 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.b scrub starts
Sep 30 14:17:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:55.524+0000 7ffa70715140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Module progress has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'prometheus'
Sep 30 14:17:55 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.b scrub ok
Sep 30 14:17:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:55 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:55.864+0000 7ffa70715140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rbd_support'
Sep 30 14:17:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:55.967+0000 7ffa70715140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Sep 30 14:17:55 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'restful'
Sep 30 14:17:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:56 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0019c0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:56 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rgw'
Sep 30 14:17:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:56.418+0000 7ffa70715140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:17:56 compute-0 ceph-mgr[74485]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Sep 30 14:17:56 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'rook'
Sep 30 14:17:56 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Sep 30 14:17:56 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 8.2 scrub starts
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 8.2 scrub ok
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 7.5 scrub starts
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 7.5 scrub ok
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 11.14 scrub starts
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 11.14 scrub ok
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 11.e scrub starts
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 7.8 scrub ok
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 10.5 scrub starts
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 11.e scrub ok
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 10.5 scrub ok
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 11.1 scrub starts
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 11.1 scrub ok
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 7.b scrub starts
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 11.a scrub starts
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 11.a scrub ok
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 7.b scrub ok
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 8.8 scrub starts
Sep 30 14:17:56 compute-0 ceph-mon[74194]: 8.8 scrub ok
Sep 30 14:17:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:56.986+0000 7ffa70715140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:17:56 compute-0 ceph-mgr[74485]: mgr[py] Module rook has missing NOTIFY_TYPES member
Sep 30 14:17:56 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'selftest'
Sep 30 14:17:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:57.061+0000 7ffa70715140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'snap_schedule'
Sep 30 14:17:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:57.143+0000 7ffa70715140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'stats'
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'status'
Sep 30 14:17:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:57 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:57.302+0000 7ffa70715140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Module status has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telegraf'
Sep 30 14:17:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:57.374+0000 7ffa70715140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'telemetry'
Sep 30 14:17:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:57.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:57 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Sep 30 14:17:57 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Sep 30 14:17:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:17:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:57.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:17:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:57.538+0000 7ffa70715140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'test_orchestrator'
Sep 30 14:17:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:57.758+0000 7ffa70715140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Sep 30 14:17:57 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'volumes'
Sep 30 14:17:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:57 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:58.036+0000 7ffa70715140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:17:58 compute-0 ceph-mgr[74485]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Sep 30 14:17:58 compute-0 ceph-mgr[74485]: mgr[py] Loading python module 'zabbix'
Sep 30 14:17:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:58.108+0000 7ffa70715140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:17:58 compute-0 ceph-mgr[74485]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Sep 30 14:17:58 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Active manager daemon compute-0.buxlkm restarted
Sep 30 14:17:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Sep 30 14:17:58 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.buxlkm
Sep 30 14:17:58 compute-0 ceph-mgr[74485]: ms_deliver_dispatch: unhandled message 0x55e1a475d860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Sep 30 14:17:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:58 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:58 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.19 deep-scrub starts
Sep 30 14:17:58 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.19 deep-scrub ok
Sep 30 14:17:58 compute-0 ceph-mon[74194]: 10.1b scrub starts
Sep 30 14:17:58 compute-0 ceph-mon[74194]: 10.1b scrub ok
Sep 30 14:17:58 compute-0 ceph-mon[74194]: 10.1e deep-scrub starts
Sep 30 14:17:58 compute-0 ceph-mon[74194]: 10.1e deep-scrub ok
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:59 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c002e50 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr handle_mgr_map Activating!
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.udzudc restarted
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.udzudc started
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.zeqptq restarted
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.zeqptq started
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.buxlkm(active, starting, since 1.31029s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr handle_mgr_map I am now activating
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.gqfeob"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.gqfeob"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e11 all = 0
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.cdakzt"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.cdakzt"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e11 all = 0
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.gwmnhp"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.gwmnhp"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e11 all = 0
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"}]: dispatch
Sep 30 14:17:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 14:17:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:17:59.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).mds e11 all = 1
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: balancer
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Manager daemon compute-0.buxlkm is now available
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Starting
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:17:59
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: cephadm
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: crash
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: dashboard
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO access_control] Loading user roles DB version=2
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO sso] Loading SSO DB version=1
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO root] Configured CherryPy, starting engine...
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: devicehealth
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: iostat
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Starting
Sep 30 14:17:59 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: nfs
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: orchestrator
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: pg_autoscaler
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: progress
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [progress INFO root] Loading...
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7ff9f7ae0af0>, <progress.module.GhostEvent object at 0x7ff9f7ae0b20>, <progress.module.GhostEvent object at 0x7ff9f7ae0b50>, <progress.module.GhostEvent object at 0x7ff9f7ae0b80>, <progress.module.GhostEvent object at 0x7ff9f7ae0bb0>, <progress.module.GhostEvent object at 0x7ff9f7ae0be0>, <progress.module.GhostEvent object at 0x7ff9f7ae0c10>, <progress.module.GhostEvent object at 0x7ff9f7ae0c40>, <progress.module.GhostEvent object at 0x7ff9f7ae0c70>, <progress.module.GhostEvent object at 0x7ff9f7ae0ca0>, <progress.module.GhostEvent object at 0x7ff9f7ae0cd0>, <progress.module.GhostEvent object at 0x7ff9f7ae0d00>, <progress.module.GhostEvent object at 0x7ff9f7ae0d30>, <progress.module.GhostEvent object at 0x7ff9f7ae0d60>, <progress.module.GhostEvent object at 0x7ff9f7ae0d90>, <progress.module.GhostEvent object at 0x7ff9f7ae0dc0>, <progress.module.GhostEvent object at 0x7ff9f7ae0df0>, <progress.module.GhostEvent object at 0x7ff9f7ae0e20>, <progress.module.GhostEvent object at 0x7ff9f7ae0e50>, <progress.module.GhostEvent object at 0x7ff9f7ae0e80>, <progress.module.GhostEvent object at 0x7ff9f7ae0eb0>, <progress.module.GhostEvent object at 0x7ff9f7ae0ee0>, <progress.module.GhostEvent object at 0x7ff9f7ae0f10>, <progress.module.GhostEvent object at 0x7ff9f7ae0f40>, <progress.module.GhostEvent object at 0x7ff9f7ae0f70>, <progress.module.GhostEvent object at 0x7ff9f7ae0fa0>, <progress.module.GhostEvent object at 0x7ff9f7ae0fd0>, <progress.module.GhostEvent object at 0x7ff9f0253040>, <progress.module.GhostEvent object at 0x7ff9f0253070>] historic events
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [progress INFO root] Loaded OSDMap, ready.
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Sep 30 14:17:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:17:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:17:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:17:59.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:17:59 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: prometheus
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [prometheus INFO root] server_addr: :: server_port: 9283
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [prometheus INFO root] Cache enabled
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [prometheus INFO root] starting metric collection thread
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [prometheus INFO root] Starting engine...
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.error] [30/Sep/2025:14:17:59] ENGINE Bus STARTING
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: [30/Sep/2025:14:17:59] ENGINE Bus STARTING
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: CherryPy Checker:
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: The Application mounted at '' has an empty config.
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [rbd_support INFO root] recovery thread starting
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [rbd_support INFO root] starting setup
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: rbd_support
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: restful
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [restful INFO root] server_addr: :: server_port: 8003
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: status
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"} v 0)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: telemetry
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [restful WARNING root] server not running: no certificate configured
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: mgr load Constructed class from module: volumes
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: [30/Sep/2025:14:17:59] ENGINE Serving on http://:::9283
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.error] [30/Sep/2025:14:17:59] ENGINE Serving on http://:::9283
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: [30/Sep/2025:14:17:59] ENGINE Bus STARTED
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.error] [30/Sep/2025:14:17:59] ENGINE Bus STARTED
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [prometheus INFO root] Engine started.
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:59.732+0000 7ff9d8531640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:59.734+0000 7ff9d44e9640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:59.734+0000 7ff9d44e9640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:59.734+0000 7ff9d44e9640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:59.734+0000 7ff9d44e9640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:17:59.734+0000 7ff9d44e9640 -1 client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: client.0 error registering admin socket command: (17) File exists
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Sep 30 14:17:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:17:59 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 11.5 deep-scrub starts
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 11.5 deep-scrub ok
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 10.2 scrub starts
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 10.2 scrub ok
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 8.16 scrub starts
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 8.16 scrub ok
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 11.4 scrub starts
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 11.4 scrub ok
Sep 30 14:17:59 compute-0 ceph-mon[74194]: Active manager daemon compute-0.buxlkm restarted
Sep 30 14:17:59 compute-0 ceph-mon[74194]: Activating manager daemon compute-0.buxlkm
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 12.19 deep-scrub starts
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 12.19 deep-scrub ok
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 8.9 scrub starts
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 8.9 scrub ok
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 8.4 scrub starts
Sep 30 14:17:59 compute-0 ceph-mon[74194]: 8.4 scrub ok
Sep 30 14:17:59 compute-0 ceph-mon[74194]: osdmap e81: 3 total, 3 up, 3 in
Sep 30 14:17:59 compute-0 ceph-mon[74194]: Standby manager daemon compute-2.udzudc restarted
Sep 30 14:17:59 compute-0 ceph-mon[74194]: Standby manager daemon compute-2.udzudc started
Sep 30 14:17:59 compute-0 ceph-mon[74194]: Standby manager daemon compute-1.zeqptq restarted
Sep 30 14:17:59 compute-0 ceph-mon[74194]: Standby manager daemon compute-1.zeqptq started
Sep 30 14:17:59 compute-0 ceph-mon[74194]: mgrmap e27: compute-0.buxlkm(active, starting, since 1.31029s), standbys: compute-1.zeqptq, compute-2.udzudc
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.gqfeob"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.cdakzt"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.gwmnhp"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.buxlkm", "id": "compute-0.buxlkm"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.zeqptq", "id": "compute-1.zeqptq"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.udzudc", "id": "compute-2.udzudc"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Sep 30 14:17:59 compute-0 ceph-mon[74194]: Manager daemon compute-0.buxlkm is now available
Sep 30 14:17:59 compute-0 sshd-session[100592]: Accepted publickey for ceph-admin from 192.168.122.100 port 52480 ssh2: RSA SHA256:xW6Secl6o9Q/fOm6V4KS97DIZ06Q0FgYLSMG01uhfVw
Sep 30 14:17:59 compute-0 systemd-logind[808]: New session 38 of user ceph-admin.
Sep 30 14:17:59 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Sep 30 14:17:59 compute-0 sshd-session[100592]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Sep 30 14:17:59 compute-0 ceph-mgr[74485]: [dashboard INFO dashboard.module] Engine started.
Sep 30 14:18:00 compute-0 sudo[100677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:00 compute-0 sudo[100677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:00 compute-0 sudo[100677]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:00 compute-0 sudo[100702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:18:00 compute-0 sudo[100702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:00 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:00 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Sep 30 14:18:00 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:18:00] ENGINE Bus STARTING
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:18:00] ENGINE Bus STARTING
Sep 30 14:18:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] PerfHandler: starting
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: vms, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: volumes, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: backups, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_task_task: images, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TaskHandler: starting
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:18:00] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:18:00] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:18:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:18:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:18:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"} v 0)
Sep 30 14:18:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"}]: dispatch
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] setup complete
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:18:00] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:18:00] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:18:00] ENGINE Bus STARTED
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:18:00] ENGINE Bus STARTED
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: [cephadm INFO cherrypy.error] [30/Sep/2025:14:18:00] ENGINE Client ('192.168.122.100', 42880) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:18:00 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : [30/Sep/2025:14:18:00] ENGINE Client ('192.168.122.100', 42880) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:18:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.buxlkm(active, since 2s), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:18:01 compute-0 podman[100797]: 2025-09-30 14:18:01.010669686 +0000 UTC m=+0.438812629 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:18:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v3: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:01 compute-0 ceph-mon[74194]: 7.9 scrub starts
Sep 30 14:18:01 compute-0 ceph-mon[74194]: 7.9 scrub ok
Sep 30 14:18:01 compute-0 ceph-mon[74194]: 12.13 scrub starts
Sep 30 14:18:01 compute-0 ceph-mon[74194]: 12.13 scrub ok
Sep 30 14:18:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/mirror_snapshot_schedule"}]: dispatch
Sep 30 14:18:01 compute-0 ceph-mon[74194]: 11.7 deep-scrub starts
Sep 30 14:18:01 compute-0 ceph-mon[74194]: 11.7 deep-scrub ok
Sep 30 14:18:01 compute-0 ceph-mon[74194]: [30/Sep/2025:14:18:00] ENGINE Bus STARTING
Sep 30 14:18:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:18:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.buxlkm/trash_purge_schedule"}]: dispatch
Sep 30 14:18:01 compute-0 podman[100844]: 2025-09-30 14:18:01.199500534 +0000 UTC m=+0.088283677 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:18:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:01 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a2b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:01 compute-0 podman[100797]: 2025-09-30 14:18:01.312501736 +0000 UTC m=+0.740644659 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:18:01 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Sep 30 14:18:01 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Sep 30 14:18:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:01.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v4: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Sep 30 14:18:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Sep 30 14:18:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Sep 30 14:18:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Sep 30 14:18:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:01.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:01 compute-0 sshd-session[100859]: Accepted publickey for zuul from 192.168.122.30 port 60000 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:18:01 compute-0 systemd-logind[808]: New session 39 of user zuul.
Sep 30 14:18:01 compute-0 systemd[1]: Started Session 39 of User zuul.
Sep 30 14:18:01 compute-0 sshd-session[100859]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:18:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:01 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:02 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Sep 30 14:18:02 compute-0 podman[101049]: 2025-09-30 14:18:02.278339207 +0000 UTC m=+0.278727829 container exec 0d94fdcb0089ce3f537370219af53558d7149360386a0f8dbbd34c4af8a36ba9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:02 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Check health
Sep 30 14:18:02 compute-0 podman[101049]: 2025-09-30 14:18:02.310037695 +0000 UTC m=+0.310426337 container exec_died 0d94fdcb0089ce3f537370219af53558d7149360386a0f8dbbd34c4af8a36ba9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:02 compute-0 python3.9[101113]: ansible-ansible.legacy.ping Invoked with data=pong
Sep 30 14:18:02 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Sep 30 14:18:02 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Sep 30 14:18:02 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Sep 30 14:18:02 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Sep 30 14:18:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Sep 30 14:18:02 compute-0 ceph-mon[74194]: 10.13 scrub starts
Sep 30 14:18:02 compute-0 ceph-mon[74194]: 10.13 scrub ok
Sep 30 14:18:02 compute-0 ceph-mon[74194]: 8.1c scrub starts
Sep 30 14:18:02 compute-0 ceph-mon[74194]: 8.1c scrub ok
Sep 30 14:18:02 compute-0 ceph-mon[74194]: [30/Sep/2025:14:18:00] ENGINE Serving on http://192.168.122.100:8765
Sep 30 14:18:02 compute-0 ceph-mon[74194]: 8.18 scrub starts
Sep 30 14:18:02 compute-0 ceph-mon[74194]: 8.18 scrub ok
Sep 30 14:18:02 compute-0 ceph-mon[74194]: [30/Sep/2025:14:18:00] ENGINE Serving on https://192.168.122.100:7150
Sep 30 14:18:02 compute-0 ceph-mon[74194]: [30/Sep/2025:14:18:00] ENGINE Bus STARTED
Sep 30 14:18:02 compute-0 ceph-mon[74194]: [30/Sep/2025:14:18:00] ENGINE Client ('192.168.122.100', 42880) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Sep 30 14:18:02 compute-0 ceph-mon[74194]: mgrmap e28: compute-0.buxlkm(active, since 2s), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:18:02 compute-0 ceph-mon[74194]: pgmap v3: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:02 compute-0 ceph-mon[74194]: pgmap v4: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Sep 30 14:18:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Sep 30 14:18:02 compute-0 ceph-mon[74194]: 7.14 scrub starts
Sep 30 14:18:02 compute-0 ceph-mon[74194]: 7.14 scrub ok
Sep 30 14:18:02 compute-0 ceph-mon[74194]: 11.1b scrub starts
Sep 30 14:18:02 compute-0 ceph-mon[74194]: 11.1b scrub ok
Sep 30 14:18:02 compute-0 podman[101253]: 2025-09-30 14:18:02.709573858 +0000 UTC m=+0.139832997 container exec 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:18:02 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Sep 30 14:18:02 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.buxlkm(active, since 4s), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:18:02 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 82 pg[6.9( empty local-lis/les=0/0 n=0 ec=51/22 lis/c=60/60 les/c/f=61/61/0 sis=82) [0] r=0 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:18:02 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 82 pg[9.9( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=82 pruub=12.871338844s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 262.536865234s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:02 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 82 pg[9.9( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=82 pruub=12.871311188s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.536865234s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:02 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 82 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=82 pruub=12.869329453s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 262.536468506s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:02 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 82 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=82 pruub=12.869286537s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.536468506s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:02 compute-0 podman[101253]: 2025-09-30 14:18:02.8587087 +0000 UTC m=+0.288967849 container exec_died 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:18:03 compute-0 podman[101364]: 2025-09-30 14:18:03.171394667 +0000 UTC m=+0.131968693 container exec ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:18:03 compute-0 podman[101432]: 2025-09-30 14:18:03.248443085 +0000 UTC m=+0.059098728 container exec_died ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:18:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:03 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:03 compute-0 podman[101364]: 2025-09-30 14:18:03.27054211 +0000 UTC m=+0.231116136 container exec_died ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:18:03 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.e scrub starts
Sep 30 14:18:03 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.e scrub ok
Sep 30 14:18:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v6: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:03.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Sep 30 14:18:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Sep 30 14:18:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Sep 30 14:18:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Sep 30 14:18:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:03.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:03 compute-0 python3.9[101471]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:18:03 compute-0 podman[101507]: 2025-09-30 14:18:03.611796459 +0000 UTC m=+0.173849349 container exec df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-type=git, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=)
Sep 30 14:18:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:18:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Sep 30 14:18:03 compute-0 podman[101530]: 2025-09-30 14:18:03.76350496 +0000 UTC m=+0.132377753 container exec_died df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., architecture=x86_64, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 14:18:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:18:03 compute-0 podman[101507]: 2025-09-30 14:18:03.798381095 +0000 UTC m=+0.360433965 container exec_died df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=2.2.4, architecture=x86_64, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public)
Sep 30 14:18:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:03 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Sep 30 14:18:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Sep 30 14:18:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Sep 30 14:18:03 compute-0 ceph-mon[74194]: 10.18 scrub starts
Sep 30 14:18:03 compute-0 ceph-mon[74194]: 10.18 scrub ok
Sep 30 14:18:03 compute-0 ceph-mon[74194]: 7.13 scrub starts
Sep 30 14:18:03 compute-0 ceph-mon[74194]: 7.13 scrub ok
Sep 30 14:18:03 compute-0 ceph-mon[74194]: 8.f scrub starts
Sep 30 14:18:03 compute-0 ceph-mon[74194]: 8.f scrub ok
Sep 30 14:18:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Sep 30 14:18:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Sep 30 14:18:03 compute-0 ceph-mon[74194]: osdmap e82: 3 total, 3 up, 3 in
Sep 30 14:18:03 compute-0 ceph-mon[74194]: mgrmap e29: compute-0.buxlkm(active, since 4s), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:18:03 compute-0 ceph-mon[74194]: 11.1d scrub starts
Sep 30 14:18:03 compute-0 ceph-mon[74194]: 11.1d scrub ok
Sep 30 14:18:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Sep 30 14:18:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Sep 30 14:18:03 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Sep 30 14:18:03 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 83 pg[9.9( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[0] r=0 lpr=83 pi=[55,83)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:03 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 83 pg[9.9( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[0] r=0 lpr=83 pi=[55,83)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:18:03 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 83 pg[9.a( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=83 pruub=11.758865356s) [1] r=-1 lpr=83 pi=[55,83)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 262.536682129s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:03 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 83 pg[9.a( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=83 pruub=11.758842468s) [1] r=-1 lpr=83 pi=[55,83)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.536682129s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:03 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 83 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=83 pruub=11.758570671s) [1] r=-1 lpr=83 pi=[55,83)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 262.536590576s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:03 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 83 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=83 pruub=11.758553505s) [1] r=-1 lpr=83 pi=[55,83)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.536590576s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:03 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 83 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[0] r=0 lpr=83 pi=[55,83)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:03 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 83 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[0] r=0 lpr=83 pi=[55,83)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:18:04 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 83 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=82/83 n=0 ec=51/22 lis/c=60/60 les/c/f=61/61/0 sis=82) [0] r=0 lpr=82 pi=[60,82)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:18:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:04 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:04 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:04 compute-0 podman[101601]: 2025-09-30 14:18:04.243370343 +0000 UTC m=+0.252311856 container exec bd20ee432b94b120e4d4e48f8e160634ffb584df5fe8133f3bd8a9cff9cb64c7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:04 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Sep 30 14:18:04 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Sep 30 14:18:04 compute-0 podman[101703]: 2025-09-30 14:18:04.335384351 +0000 UTC m=+0.061885475 container exec_died bd20ee432b94b120e4d4e48f8e160634ffb584df5fe8133f3bd8a9cff9cb64c7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:18:04 compute-0 sudo[101766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqeawjncogyxfitinupyzxaaxzdqwktw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241884.0182326-93-154228070513506/AnsiballZ_command.py'
Sep 30 14:18:04 compute-0 sudo[101766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:18:04 compute-0 podman[101601]: 2025-09-30 14:18:04.413612021 +0000 UTC m=+0.422553504 container exec_died bd20ee432b94b120e4d4e48f8e160634ffb584df5fe8133f3bd8a9cff9cb64c7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:04 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:18:04 compute-0 python3.9[101768]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:18:04 compute-0 sudo[101766]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:04 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:04] "GET /metrics HTTP/1.1" 200 46656 "" "Prometheus/2.51.0"
Sep 30 14:18:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:04] "GET /metrics HTTP/1.1" 200 46656 "" "Prometheus/2.51.0"
Sep 30 14:18:04 compute-0 podman[101799]: 2025-09-30 14:18:04.890402199 +0000 UTC m=+0.185293442 container exec 93c8c5607d3b21ae8cda4d1f43e88d294c0ac0bcb4ca72548c6be243950b6313 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Sep 30 14:18:05 compute-0 ceph-mon[74194]: 7.e scrub starts
Sep 30 14:18:05 compute-0 ceph-mon[74194]: 7.e scrub ok
Sep 30 14:18:05 compute-0 ceph-mon[74194]: pgmap v6: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:05 compute-0 ceph-mon[74194]: 8.c deep-scrub starts
Sep 30 14:18:05 compute-0 ceph-mon[74194]: 8.c deep-scrub ok
Sep 30 14:18:05 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:05 compute-0 ceph-mon[74194]: 11.1e scrub starts
Sep 30 14:18:05 compute-0 ceph-mon[74194]: 11.1e scrub ok
Sep 30 14:18:05 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Sep 30 14:18:05 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Sep 30 14:18:05 compute-0 ceph-mon[74194]: osdmap e83: 3 total, 3 up, 3 in
Sep 30 14:18:05 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:05 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:05 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:05 compute-0 podman[101799]: 2025-09-30 14:18:05.084523811 +0000 UTC m=+0.379415034 container exec_died 93c8c5607d3b21ae8cda4d1f43e88d294c0ac0bcb4ca72548c6be243950b6313 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.buxlkm(active, since 7s), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:18:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 84 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] r=0 lpr=84 pi=[55,84)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 84 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] r=0 lpr=84 pi=[55,84)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:18:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 84 pg[9.a( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] r=0 lpr=84 pi=[55,84)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 84 pg[9.a( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] r=0 lpr=84 pi=[55,84)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:18:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 84 pg[9.9( v 48'1157 (0'0,48'1157] local-lis/les=83/84 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[0] async=[2] r=0 lpr=83 pi=[55,83)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:18:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 84 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=83/84 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[0] async=[2] r=0 lpr=83 pi=[55,83)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:18:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:05 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c002e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Sep 30 14:18:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 85 pg[9.9( v 48'1157 (0'0,48'1157] local-lis/les=83/84 n=5 ec=55/37 lis/c=83/55 les/c/f=84/56/0 sis=85 pruub=15.858986855s) [2] async=[2] r=-1 lpr=85 pi=[55,85)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 268.071563721s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 85 pg[9.9( v 48'1157 (0'0,48'1157] local-lis/les=83/84 n=5 ec=55/37 lis/c=83/55 les/c/f=84/56/0 sis=85 pruub=15.858902931s) [2] r=-1 lpr=85 pi=[55,85)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 268.071563721s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v10: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Sep 30 14:18:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:05.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:05 compute-0 sudo[102054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shuvhlesusfdtqunwigfewnyzbmgfbze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241885.0559664-129-127191724792062/AnsiballZ_stat.py'
Sep 30 14:18:05 compute-0 sudo[102054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:18:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:05.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:05 compute-0 podman[102055]: 2025-09-30 14:18:05.610935367 +0000 UTC m=+0.112797538 container exec e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:05 compute-0 podman[102055]: 2025-09-30 14:18:05.644629909 +0000 UTC m=+0.146492060 container exec_died e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:05 compute-0 python3.9[102063]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:18:05 compute-0 sudo[102054]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:05 compute-0 sudo[100702]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:05 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 85 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=84/85 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[55,84)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:18:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:18:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 85 pg[9.a( v 48'1157 (0'0,48'1157] local-lis/les=84/85 n=6 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[55,84)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:05 compute-0 sudo[102126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:05 compute-0 sudo[102126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:06 compute-0 sudo[102126]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:18:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 14:18:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:18:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Sep 30 14:18:06 compute-0 sudo[102159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:18:06 compute-0 sudo[102159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:06 compute-0 ceph-mon[74194]: 12.12 scrub starts
Sep 30 14:18:06 compute-0 ceph-mon[74194]: 12.12 scrub ok
Sep 30 14:18:06 compute-0 ceph-mon[74194]: 12.18 scrub starts
Sep 30 14:18:06 compute-0 ceph-mon[74194]: 12.18 scrub ok
Sep 30 14:18:06 compute-0 ceph-mon[74194]: 11.1c scrub starts
Sep 30 14:18:06 compute-0 ceph-mon[74194]: 11.1c scrub ok
Sep 30 14:18:06 compute-0 ceph-mon[74194]: osdmap e84: 3 total, 3 up, 3 in
Sep 30 14:18:06 compute-0 ceph-mon[74194]: mgrmap e30: compute-0.buxlkm(active, since 7s), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:18:06 compute-0 ceph-mon[74194]: osdmap e85: 3 total, 3 up, 3 in
Sep 30 14:18:06 compute-0 ceph-mon[74194]: pgmap v10: 337 pgs: 337 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Sep 30 14:18:06 compute-0 ceph-mon[74194]: 11.f scrub starts
Sep 30 14:18:06 compute-0 ceph-mon[74194]: 11.f scrub ok
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:18:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Sep 30 14:18:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:06 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Sep 30 14:18:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Sep 30 14:18:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Sep 30 14:18:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Sep 30 14:18:06 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Sep 30 14:18:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 86 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=83/84 n=5 ec=55/37 lis/c=83/55 les/c/f=84/56/0 sis=86 pruub=14.786787033s) [2] async=[2] r=-1 lpr=86 pi=[55,86)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 268.081329346s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 86 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=83/84 n=5 ec=55/37 lis/c=83/55 les/c/f=84/56/0 sis=86 pruub=14.786728859s) [2] r=-1 lpr=86 pi=[55,86)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 268.081329346s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 86 pg[9.a( v 48'1157 (0'0,48'1157] local-lis/les=84/85 n=6 ec=55/37 lis/c=84/55 les/c/f=85/56/0 sis=86 pruub=15.413999557s) [1] async=[1] r=-1 lpr=86 pi=[55,86)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 268.708892822s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 86 pg[9.a( v 48'1157 (0'0,48'1157] local-lis/les=84/85 n=6 ec=55/37 lis/c=84/55 les/c/f=85/56/0 sis=86 pruub=15.413953781s) [1] r=-1 lpr=86 pi=[55,86)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 268.708892822s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 86 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=84/85 n=5 ec=55/37 lis/c=84/55 les/c/f=85/56/0 sis=86 pruub=15.403252602s) [1] async=[1] r=-1 lpr=86 pi=[55,86)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 268.698730469s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 86 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=84/85 n=5 ec=55/37 lis/c=84/55 les/c/f=85/56/0 sis=86 pruub=15.403178215s) [1] r=-1 lpr=86 pi=[55,86)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 268.698730469s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 86 pg[6.b( empty local-lis/les=0/0 n=0 ec=51/22 lis/c=63/63 les/c/f=64/64/0 sis=86) [0] r=0 lpr=86 pi=[63,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:18:06 compute-0 sudo[102320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igywzcwjhbyiyocsypaakzcashvefrvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241886.0337284-162-96480739646505/AnsiballZ_file.py'
Sep 30 14:18:06 compute-0 sudo[102320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:18:06 compute-0 sudo[102159]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:06 compute-0 sudo[102335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:06 compute-0 sudo[102335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:06 compute-0 sudo[102335]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:06 compute-0 sudo[102360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 14:18:06 compute-0 sudo[102360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:06 compute-0 python3.9[102322]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:18:06 compute-0 sudo[102320]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:06 compute-0 sudo[102360]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 14:18:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:18:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:07 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:18:07 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:18:07 compute-0 sudo[102478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 14:18:07 compute-0 sudo[102478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:07 compute-0 sudo[102478]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:07 compute-0 ceph-mon[74194]: 9.5 scrub starts
Sep 30 14:18:07 compute-0 ceph-mon[74194]: 9.5 scrub ok
Sep 30 14:18:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Sep 30 14:18:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Sep 30 14:18:07 compute-0 ceph-mon[74194]: osdmap e86: 3 total, 3 up, 3 in
Sep 30 14:18:07 compute-0 ceph-mon[74194]: 9.1f scrub starts
Sep 30 14:18:07 compute-0 ceph-mon[74194]: 9.1f scrub ok
Sep 30 14:18:07 compute-0 ceph-mon[74194]: 8.10 deep-scrub starts
Sep 30 14:18:07 compute-0 ceph-mon[74194]: 8.10 deep-scrub ok
Sep 30 14:18:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:18:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:18:07 compute-0 sudo[102503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph
Sep 30 14:18:07 compute-0 sudo[102503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:07 compute-0 sudo[102503]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:07 compute-0 sudo[102529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:18:07 compute-0 sudo[102529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:07 compute-0 sudo[102529]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:07 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf640016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:07 compute-0 sudo[102576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:07 compute-0 sudo[102576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:07 compute-0 sudo[102576]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:07 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Sep 30 14:18:07 compute-0 sudo[102626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:18:07 compute-0 sudo[102626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:07 compute-0 sudo[102626]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:07 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Sep 30 14:18:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Sep 30 14:18:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Sep 30 14:18:07 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Sep 30 14:18:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 87 pg[6.b( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=86/87 n=1 ec=51/22 lis/c=63/63 les/c/f=64/64/0 sis=86) [0] r=0 lpr=86 pi=[63,86)/1 crt=48'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v13: 337 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 1 active+remapped, 1 peering, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 10/213 objects misplaced (4.695%); 0 B/s, 0 objects/s recovering
Sep 30 14:18:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:07.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:07 compute-0 sudo[102701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:18:07 compute-0 sudo[102701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:07 compute-0 sudo[102701]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:07 compute-0 sudo[102726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new
Sep 30 14:18:07 compute-0 sudo[102726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:07 compute-0 sudo[102726]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:07.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:07 compute-0 python3.9[102675]: ansible-ansible.builtin.service_facts Invoked
Sep 30 14:18:07 compute-0 sudo[102751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Sep 30 14:18:07 compute-0 sudo[102751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:18:07 compute-0 sudo[102751]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:18:07 compute-0 network[102799]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 14:18:07 compute-0 network[102803]: 'network-scripts' will be removed from distribution in near future.
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:18:07 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:18:07 compute-0 network[102808]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 14:18:07 compute-0 sudo[102784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:18:07 compute-0 sudo[102784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:07 compute-0 sudo[102784]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:07 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:08 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:08 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Sep 30 14:18:08 compute-0 sudo[102825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:18:08 compute-0 sudo[102825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:08 compute-0 sudo[102825]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:08 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Sep 30 14:18:08 compute-0 sudo[102852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:18:08 compute-0 sudo[102852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:08 compute-0 sudo[102852]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:08 compute-0 ceph-mon[74194]: Updating compute-0:/etc/ceph/ceph.conf
Sep 30 14:18:08 compute-0 ceph-mon[74194]: Updating compute-1:/etc/ceph/ceph.conf
Sep 30 14:18:08 compute-0 ceph-mon[74194]: Updating compute-2:/etc/ceph/ceph.conf
Sep 30 14:18:08 compute-0 ceph-mon[74194]: 9.1 scrub starts
Sep 30 14:18:08 compute-0 ceph-mon[74194]: 9.1 scrub ok
Sep 30 14:18:08 compute-0 ceph-mon[74194]: osdmap e87: 3 total, 3 up, 3 in
Sep 30 14:18:08 compute-0 ceph-mon[74194]: pgmap v13: 337 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 1 active+remapped, 1 peering, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 10/213 objects misplaced (4.695%); 0 B/s, 0 objects/s recovering
Sep 30 14:18:08 compute-0 ceph-mon[74194]: 9.f scrub starts
Sep 30 14:18:08 compute-0 ceph-mon[74194]: 9.f scrub ok
Sep 30 14:18:08 compute-0 ceph-mon[74194]: Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:18:08 compute-0 ceph-mon[74194]: Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:18:08 compute-0 ceph-mon[74194]: Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:18:08 compute-0 ceph-mon[74194]: 11.12 scrub starts
Sep 30 14:18:08 compute-0 ceph-mon[74194]: 11.12 scrub ok
Sep 30 14:18:08 compute-0 sudo[102881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:08 compute-0 sudo[102881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:08 compute-0 sudo[102881]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:08 compute-0 sudo[102909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:18:08 compute-0 sudo[102909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:08 compute-0 sudo[102909]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:08 compute-0 sudo[102964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:18:08 compute-0 sudo[102964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:08 compute-0 sudo[102964]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:08 compute-0 sudo[102992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new
Sep 30 14:18:08 compute-0 sudo[102992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:08 compute-0 sudo[102992]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:08 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:18:08 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:18:08 compute-0 sudo[103020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf.new /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.conf
Sep 30 14:18:08 compute-0 sudo[103020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:08 compute-0 sudo[103020]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:08 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:18:08 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:18:08 compute-0 sudo[103048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Sep 30 14:18:08 compute-0 sudo[103048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:08 compute-0 sudo[103048]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:08 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:18:08 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:18:08 compute-0 sudo[103077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph
Sep 30 14:18:08 compute-0 sudo[103077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:08 compute-0 sudo[103077]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:08 compute-0 sudo[103105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:18:08 compute-0 sudo[103105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:08 compute-0 sudo[103105]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 sudo[103134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:09 compute-0 sudo[103134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103134]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 sudo[103160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:18:09 compute-0 sudo[103160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103160]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 sudo[103208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:18:09 compute-0 sudo[103208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103208]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:09 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:09 compute-0 sudo[103233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new
Sep 30 14:18:09 compute-0 sudo[103233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103233]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:18:09 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:18:09 compute-0 sudo[103259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Sep 30 14:18:09 compute-0 sudo[103259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103259]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:18:09 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:18:09 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Sep 30 14:18:09 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Sep 30 14:18:09 compute-0 sudo[103284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:18:09 compute-0 sudo[103284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103284]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v14: 337 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 1 active+remapped, 1 peering, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 10/213 objects misplaced (4.695%); 0 B/s, 0 objects/s recovering
Sep 30 14:18:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:09.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:09 compute-0 ceph-mon[74194]: 9.1c scrub starts
Sep 30 14:18:09 compute-0 ceph-mon[74194]: 9.1c scrub ok
Sep 30 14:18:09 compute-0 ceph-mon[74194]: 9.18 scrub starts
Sep 30 14:18:09 compute-0 ceph-mon[74194]: 9.18 scrub ok
Sep 30 14:18:09 compute-0 ceph-mon[74194]: 5.1b scrub starts
Sep 30 14:18:09 compute-0 ceph-mon[74194]: 5.1b scrub ok
Sep 30 14:18:09 compute-0 sudo[103309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config
Sep 30 14:18:09 compute-0 sudo[103309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103309]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:18:09 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:18:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:09.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:09 compute-0 sudo[103334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:18:09 compute-0 sudo[103334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103334]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 sudo[103359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:09 compute-0 sudo[103359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103359]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 sudo[103384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:18:09 compute-0 sudo[103384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103384]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 sudo[103433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:18:09 compute-0 sudo[103433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103433]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 sudo[103458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new
Sep 30 14:18:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:18:09 compute-0 sudo[103458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103458]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:18:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:09 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf640016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:09 compute-0 sudo[103483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-5e3c7776-ac03-5698-b79f-a6dc2d80cae6/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring.new /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:18:09 compute-0 sudo[103483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:09 compute-0 sudo[103483]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:18:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:10 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:18:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:18:10 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Sep 30 14:18:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:18:10 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Sep 30 14:18:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:18:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:18:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:18:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:18:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:10 compute-0 sudo[103528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:10 compute-0 sudo[103528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:10 compute-0 sudo[103528]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:10 compute-0 sudo[103556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:18:10 compute-0 sudo[103556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:10 compute-0 ceph-mon[74194]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:18:10 compute-0 ceph-mon[74194]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:18:10 compute-0 ceph-mon[74194]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Sep 30 14:18:10 compute-0 ceph-mon[74194]: Updating compute-2:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:18:10 compute-0 ceph-mon[74194]: Updating compute-0:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:18:10 compute-0 ceph-mon[74194]: 9.12 scrub starts
Sep 30 14:18:10 compute-0 ceph-mon[74194]: 9.12 scrub ok
Sep 30 14:18:10 compute-0 ceph-mon[74194]: pgmap v14: 337 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 1 active+remapped, 1 peering, 333 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 10/213 objects misplaced (4.695%); 0 B/s, 0 objects/s recovering
Sep 30 14:18:10 compute-0 ceph-mon[74194]: 9.7 scrub starts
Sep 30 14:18:10 compute-0 ceph-mon[74194]: Updating compute-1:/var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/config/ceph.client.admin.keyring
Sep 30 14:18:10 compute-0 ceph-mon[74194]: 9.7 scrub ok
Sep 30 14:18:10 compute-0 ceph-mon[74194]: 8.12 scrub starts
Sep 30 14:18:10 compute-0 ceph-mon[74194]: 8.12 scrub ok
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:18:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:10 compute-0 podman[103646]: 2025-09-30 14:18:10.893668483 +0000 UTC m=+0.021017756 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:11 compute-0 podman[103646]: 2025-09-30 14:18:11.101610243 +0000 UTC m=+0.228959496 container create 4f4347c054e98efaf31557060a20e33a613be9a0140999f935e9e676100a8fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_clarke, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:18:11 compute-0 systemd[1]: Started libpod-conmon-4f4347c054e98efaf31557060a20e33a613be9a0140999f935e9e676100a8fa7.scope.
Sep 30 14:18:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:11 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:11 compute-0 podman[103646]: 2025-09-30 14:18:11.341110367 +0000 UTC m=+0.468459640 container init 4f4347c054e98efaf31557060a20e33a613be9a0140999f935e9e676100a8fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:18:11 compute-0 podman[103646]: 2025-09-30 14:18:11.34925758 +0000 UTC m=+0.476606833 container start 4f4347c054e98efaf31557060a20e33a613be9a0140999f935e9e676100a8fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:18:11 compute-0 strange_clarke[103682]: 167 167
Sep 30 14:18:11 compute-0 systemd[1]: libpod-4f4347c054e98efaf31557060a20e33a613be9a0140999f935e9e676100a8fa7.scope: Deactivated successfully.
Sep 30 14:18:11 compute-0 podman[103646]: 2025-09-30 14:18:11.382760757 +0000 UTC m=+0.510110010 container attach 4f4347c054e98efaf31557060a20e33a613be9a0140999f935e9e676100a8fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_clarke, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:18:11 compute-0 podman[103646]: 2025-09-30 14:18:11.38324135 +0000 UTC m=+0.510590623 container died 4f4347c054e98efaf31557060a20e33a613be9a0140999f935e9e676100a8fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:18:11 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Sep 30 14:18:11 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Sep 30 14:18:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v15: 337 pgs: 337 active+clean; 457 KiB data, 130 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Sep 30 14:18:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Sep 30 14:18:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Sep 30 14:18:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Sep 30 14:18:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:11.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Sep 30 14:18:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:11.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c4d107023e1cd368704c4b7152e6593953ddfe9150fb34ad7bb9d4bb1e16c22-merged.mount: Deactivated successfully.
Sep 30 14:18:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Sep 30 14:18:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Sep 30 14:18:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Sep 30 14:18:11 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Sep 30 14:18:11 compute-0 ceph-mon[74194]: 12.6 scrub starts
Sep 30 14:18:11 compute-0 ceph-mon[74194]: 12.6 scrub ok
Sep 30 14:18:11 compute-0 ceph-mon[74194]: 9.1b scrub starts
Sep 30 14:18:11 compute-0 ceph-mon[74194]: 9.1b scrub ok
Sep 30 14:18:11 compute-0 ceph-mon[74194]: 8.1b scrub starts
Sep 30 14:18:11 compute-0 ceph-mon[74194]: 8.1b scrub ok
Sep 30 14:18:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Sep 30 14:18:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Sep 30 14:18:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:11 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:11 compute-0 podman[103646]: 2025-09-30 14:18:11.88317722 +0000 UTC m=+1.010526483 container remove 4f4347c054e98efaf31557060a20e33a613be9a0140999f935e9e676100a8fa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:18:11 compute-0 systemd[1]: libpod-conmon-4f4347c054e98efaf31557060a20e33a613be9a0140999f935e9e676100a8fa7.scope: Deactivated successfully.
Sep 30 14:18:12 compute-0 podman[103840]: 2025-09-30 14:18:12.082929906 +0000 UTC m=+0.086322443 container create 9c09efec89d6281a88261b2e99950e565d94e83536cdcb018e2eda0da2076656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:18:12 compute-0 podman[103840]: 2025-09-30 14:18:12.021489765 +0000 UTC m=+0.024882322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:12 compute-0 systemd[1]: Started libpod-conmon-9c09efec89d6281a88261b2e99950e565d94e83536cdcb018e2eda0da2076656.scope.
Sep 30 14:18:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:12 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf640016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3b294be7cfe0f71a057b679f7232730c98828b79718854ae539e854a1e6a20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3b294be7cfe0f71a057b679f7232730c98828b79718854ae539e854a1e6a20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3b294be7cfe0f71a057b679f7232730c98828b79718854ae539e854a1e6a20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3b294be7cfe0f71a057b679f7232730c98828b79718854ae539e854a1e6a20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3b294be7cfe0f71a057b679f7232730c98828b79718854ae539e854a1e6a20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:12 compute-0 podman[103840]: 2025-09-30 14:18:12.203912277 +0000 UTC m=+0.207304824 container init 9c09efec89d6281a88261b2e99950e565d94e83536cdcb018e2eda0da2076656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_snyder, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:18:12 compute-0 podman[103840]: 2025-09-30 14:18:12.210712763 +0000 UTC m=+0.214105300 container start 9c09efec89d6281a88261b2e99950e565d94e83536cdcb018e2eda0da2076656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:18:12 compute-0 python3.9[103880]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:18:12 compute-0 podman[103840]: 2025-09-30 14:18:12.304449228 +0000 UTC m=+0.307841775 container attach 9c09efec89d6281a88261b2e99950e565d94e83536cdcb018e2eda0da2076656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_snyder, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:18:12 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Sep 30 14:18:12 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Sep 30 14:18:12 compute-0 jolly_snyder[103884]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:18:12 compute-0 jolly_snyder[103884]: --> All data devices are unavailable
Sep 30 14:18:12 compute-0 systemd[1]: libpod-9c09efec89d6281a88261b2e99950e565d94e83536cdcb018e2eda0da2076656.scope: Deactivated successfully.
Sep 30 14:18:12 compute-0 podman[103840]: 2025-09-30 14:18:12.628950049 +0000 UTC m=+0.632342586 container died 9c09efec89d6281a88261b2e99950e565d94e83536cdcb018e2eda0da2076656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_snyder, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 14:18:13 compute-0 python3.9[104058]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:18:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:13 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf640016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:13 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Sep 30 14:18:13 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Sep 30 14:18:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v17: 337 pgs: 337 active+clean; 457 KiB data, 130 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:13.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:13.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Sep 30 14:18:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Sep 30 14:18:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Sep 30 14:18:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Sep 30 14:18:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Sep 30 14:18:13 compute-0 ceph-mon[74194]: 7.18 scrub starts
Sep 30 14:18:13 compute-0 ceph-mon[74194]: 7.18 scrub ok
Sep 30 14:18:13 compute-0 ceph-mon[74194]: pgmap v15: 337 pgs: 337 active+clean; 457 KiB data, 130 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:13 compute-0 ceph-mon[74194]: 9.d scrub starts
Sep 30 14:18:13 compute-0 ceph-mon[74194]: 9.d scrub ok
Sep 30 14:18:13 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Sep 30 14:18:13 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Sep 30 14:18:13 compute-0 ceph-mon[74194]: osdmap e88: 3 total, 3 up, 3 in
Sep 30 14:18:13 compute-0 ceph-mon[74194]: 11.1a deep-scrub starts
Sep 30 14:18:13 compute-0 ceph-mon[74194]: 11.1a deep-scrub ok
Sep 30 14:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b3b294be7cfe0f71a057b679f7232730c98828b79718854ae539e854a1e6a20-merged.mount: Deactivated successfully.
Sep 30 14:18:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:13 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Sep 30 14:18:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Sep 30 14:18:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Sep 30 14:18:13 compute-0 podman[103840]: 2025-09-30 14:18:13.978092139 +0000 UTC m=+1.981484676 container remove 9c09efec89d6281a88261b2e99950e565d94e83536cdcb018e2eda0da2076656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_snyder, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:18:13 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Sep 30 14:18:14 compute-0 systemd[1]: libpod-conmon-9c09efec89d6281a88261b2e99950e565d94e83536cdcb018e2eda0da2076656.scope: Deactivated successfully.
Sep 30 14:18:14 compute-0 sudo[103556]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:14 compute-0 sudo[104216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:14 compute-0 sudo[104216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:14 compute-0 sudo[104216]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:14 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a560 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:14 compute-0 sudo[104241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:18:14 compute-0 sudo[104241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:14 compute-0 python3.9[104215]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:18:14 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Sep 30 14:18:14 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Sep 30 14:18:14 compute-0 podman[104310]: 2025-09-30 14:18:14.603820743 +0000 UTC m=+0.046081182 container create 1f1f373e1bf15fe58b0c0eb83a0c48a37f0e6408ed1a47778ef4779d9a71e0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:18:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:18:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:18:14 compute-0 systemd[1]: Started libpod-conmon-1f1f373e1bf15fe58b0c0eb83a0c48a37f0e6408ed1a47778ef4779d9a71e0bb.scope.
Sep 30 14:18:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:14 compute-0 podman[104310]: 2025-09-30 14:18:14.586012496 +0000 UTC m=+0.028272945 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:14 compute-0 podman[104310]: 2025-09-30 14:18:14.686346381 +0000 UTC m=+0.128606830 container init 1f1f373e1bf15fe58b0c0eb83a0c48a37f0e6408ed1a47778ef4779d9a71e0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_davinci, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 14:18:14 compute-0 podman[104310]: 2025-09-30 14:18:14.69324586 +0000 UTC m=+0.135506279 container start 1f1f373e1bf15fe58b0c0eb83a0c48a37f0e6408ed1a47778ef4779d9a71e0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:18:14 compute-0 podman[104310]: 2025-09-30 14:18:14.696727386 +0000 UTC m=+0.138987825 container attach 1f1f373e1bf15fe58b0c0eb83a0c48a37f0e6408ed1a47778ef4779d9a71e0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:18:14 compute-0 systemd[1]: libpod-1f1f373e1bf15fe58b0c0eb83a0c48a37f0e6408ed1a47778ef4779d9a71e0bb.scope: Deactivated successfully.
Sep 30 14:18:14 compute-0 magical_davinci[104353]: 167 167
Sep 30 14:18:14 compute-0 conmon[104353]: conmon 1f1f373e1bf15fe58b0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f1f373e1bf15fe58b0c0eb83a0c48a37f0e6408ed1a47778ef4779d9a71e0bb.scope/container/memory.events
Sep 30 14:18:14 compute-0 podman[104310]: 2025-09-30 14:18:14.698409602 +0000 UTC m=+0.140670021 container died 1f1f373e1bf15fe58b0c0eb83a0c48a37f0e6408ed1a47778ef4779d9a71e0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:18:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:14] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Sep 30 14:18:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:14] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Sep 30 14:18:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-19a6cdc5e2ec5988818b1e0a0072673fc67d5d31ad8c1170c772138efb0853e6-merged.mount: Deactivated successfully.
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 12.1c scrub starts
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 12.1c scrub ok
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 9.19 scrub starts
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 9.19 scrub ok
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 8.19 scrub starts
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 8.19 scrub ok
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 10.19 scrub starts
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 10.19 scrub ok
Sep 30 14:18:14 compute-0 ceph-mon[74194]: pgmap v17: 337 pgs: 337 active+clean; 457 KiB data, 130 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 9.8 deep-scrub starts
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 9.8 deep-scrub ok
Sep 30 14:18:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Sep 30 14:18:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 9.1a deep-scrub starts
Sep 30 14:18:14 compute-0 ceph-mon[74194]: 9.1a deep-scrub ok
Sep 30 14:18:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Sep 30 14:18:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Sep 30 14:18:14 compute-0 ceph-mon[74194]: osdmap e89: 3 total, 3 up, 3 in
Sep 30 14:18:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:18:14 compute-0 podman[104310]: 2025-09-30 14:18:14.782335908 +0000 UTC m=+0.224596327 container remove 1f1f373e1bf15fe58b0c0eb83a0c48a37f0e6408ed1a47778ef4779d9a71e0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 14:18:14 compute-0 systemd[1]: libpod-conmon-1f1f373e1bf15fe58b0c0eb83a0c48a37f0e6408ed1a47778ef4779d9a71e0bb.scope: Deactivated successfully.
Sep 30 14:18:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Sep 30 14:18:15 compute-0 podman[104420]: 2025-09-30 14:18:14.91066927 +0000 UTC m=+0.022028934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:15 compute-0 podman[104420]: 2025-09-30 14:18:15.047478094 +0000 UTC m=+0.158837738 container create d119fb0832be2b4b99bb320d706673a71c6252b246ceb6707f1252fe7909a4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_banzai, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:18:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Sep 30 14:18:15 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Sep 30 14:18:15 compute-0 systemd[1]: Started libpod-conmon-d119fb0832be2b4b99bb320d706673a71c6252b246ceb6707f1252fe7909a4f5.scope.
Sep 30 14:18:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc218a2f6daf0e7d252139276e709b5ad256a2dcf529667f0145eee478f7c01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:15 compute-0 sudo[104521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmsllqdrnbdvcqcvmraocltgmgzzkfqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241894.8406777-306-121817683351687/AnsiballZ_setup.py'
Sep 30 14:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc218a2f6daf0e7d252139276e709b5ad256a2dcf529667f0145eee478f7c01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc218a2f6daf0e7d252139276e709b5ad256a2dcf529667f0145eee478f7c01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc218a2f6daf0e7d252139276e709b5ad256a2dcf529667f0145eee478f7c01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:15 compute-0 sudo[104521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:18:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:15 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:15 compute-0 podman[104420]: 2025-09-30 14:18:15.317355199 +0000 UTC m=+0.428714863 container init d119fb0832be2b4b99bb320d706673a71c6252b246ceb6707f1252fe7909a4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:18:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:15 compute-0 podman[104420]: 2025-09-30 14:18:15.326437697 +0000 UTC m=+0.437797351 container start d119fb0832be2b4b99bb320d706673a71c6252b246ceb6707f1252fe7909a4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:18:15 compute-0 podman[104420]: 2025-09-30 14:18:15.334743514 +0000 UTC m=+0.446103188 container attach d119fb0832be2b4b99bb320d706673a71c6252b246ceb6707f1252fe7909a4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:18:15 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Sep 30 14:18:15 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Sep 30 14:18:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v20: 337 pgs: 337 active+clean; 457 KiB data, 130 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Sep 30 14:18:15 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Sep 30 14:18:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Sep 30 14:18:15 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Sep 30 14:18:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:15.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:15 compute-0 python3.9[104523]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:18:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:15.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:15 compute-0 zealous_banzai[104503]: {
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:     "0": [
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:         {
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "devices": [
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "/dev/loop3"
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             ],
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "lv_name": "ceph_lv0",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "lv_size": "21470642176",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "name": "ceph_lv0",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "tags": {
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.cluster_name": "ceph",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.crush_device_class": "",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.encrypted": "0",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.osd_id": "0",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.type": "block",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.vdo": "0",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:                 "ceph.with_tpm": "0"
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             },
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "type": "block",
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:             "vg_name": "ceph_vg0"
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:         }
Sep 30 14:18:15 compute-0 zealous_banzai[104503]:     ]
Sep 30 14:18:15 compute-0 zealous_banzai[104503]: }
Sep 30 14:18:15 compute-0 systemd[1]: libpod-d119fb0832be2b4b99bb320d706673a71c6252b246ceb6707f1252fe7909a4f5.scope: Deactivated successfully.
Sep 30 14:18:15 compute-0 podman[104420]: 2025-09-30 14:18:15.674719248 +0000 UTC m=+0.786078892 container died d119fb0832be2b4b99bb320d706673a71c6252b246ceb6707f1252fe7909a4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:18:15 compute-0 sudo[104521]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-acc218a2f6daf0e7d252139276e709b5ad256a2dcf529667f0145eee478f7c01-merged.mount: Deactivated successfully.
Sep 30 14:18:15 compute-0 ceph-mon[74194]: 7.10 scrub starts
Sep 30 14:18:15 compute-0 ceph-mon[74194]: 7.10 scrub ok
Sep 30 14:18:15 compute-0 ceph-mon[74194]: 9.1e scrub starts
Sep 30 14:18:15 compute-0 ceph-mon[74194]: 9.1e scrub ok
Sep 30 14:18:15 compute-0 ceph-mon[74194]: osdmap e90: 3 total, 3 up, 3 in
Sep 30 14:18:15 compute-0 ceph-mon[74194]: pgmap v20: 337 pgs: 337 active+clean; 457 KiB data, 130 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Sep 30 14:18:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Sep 30 14:18:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:15 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:15 compute-0 podman[104420]: 2025-09-30 14:18:15.89330337 +0000 UTC m=+1.004663014 container remove d119fb0832be2b4b99bb320d706673a71c6252b246ceb6707f1252fe7909a4f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:18:15 compute-0 systemd[1]: libpod-conmon-d119fb0832be2b4b99bb320d706673a71c6252b246ceb6707f1252fe7909a4f5.scope: Deactivated successfully.
Sep 30 14:18:15 compute-0 sudo[104241]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:15 compute-0 sudo[104575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:15 compute-0 sudo[104575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:15 compute-0 sudo[104575]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:16 compute-0 sudo[104622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:18:16 compute-0 sudo[104622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Sep 30 14:18:16 compute-0 sudo[104675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkmdefcnjnbjreueuppbyozuignfthsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241894.8406777-306-121817683351687/AnsiballZ_dnf.py'
Sep 30 14:18:16 compute-0 sudo[104675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:18:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Sep 30 14:18:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Sep 30 14:18:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Sep 30 14:18:16 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Sep 30 14:18:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 91 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=73/74 n=1 ec=51/22 lis/c=73/73 les/c/f=74/74/0 sis=91 pruub=13.758281708s) [1] r=-1 lpr=91 pi=[73,91)/1 crt=48'39 mlcod 48'39 active pruub 276.724365234s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 91 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=73/74 n=1 ec=51/22 lis/c=73/73 les/c/f=74/74/0 sis=91 pruub=13.758207321s) [1] r=-1 lpr=91 pi=[73,91)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 276.724365234s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:16 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:16 compute-0 python3.9[104677]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:18:16 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Sep 30 14:18:16 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Sep 30 14:18:16 compute-0 podman[104719]: 2025-09-30 14:18:16.463629967 +0000 UTC m=+0.071968800 container create 99ca1dcfb3f8e8a9815b6b21c9478c749953f1282142cb8b12f14ec83cb833f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:18:16 compute-0 podman[104719]: 2025-09-30 14:18:16.415119 +0000 UTC m=+0.023457853 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:16 compute-0 systemd[1]: Started libpod-conmon-99ca1dcfb3f8e8a9815b6b21c9478c749953f1282142cb8b12f14ec83cb833f3.scope.
Sep 30 14:18:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:16 compute-0 podman[104719]: 2025-09-30 14:18:16.577580376 +0000 UTC m=+0.185919229 container init 99ca1dcfb3f8e8a9815b6b21c9478c749953f1282142cb8b12f14ec83cb833f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tu, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:18:16 compute-0 podman[104719]: 2025-09-30 14:18:16.587002884 +0000 UTC m=+0.195341747 container start 99ca1dcfb3f8e8a9815b6b21c9478c749953f1282142cb8b12f14ec83cb833f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tu, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:18:16 compute-0 vibrant_tu[104735]: 167 167
Sep 30 14:18:16 compute-0 systemd[1]: libpod-99ca1dcfb3f8e8a9815b6b21c9478c749953f1282142cb8b12f14ec83cb833f3.scope: Deactivated successfully.
Sep 30 14:18:16 compute-0 podman[104719]: 2025-09-30 14:18:16.598352494 +0000 UTC m=+0.206691327 container attach 99ca1dcfb3f8e8a9815b6b21c9478c749953f1282142cb8b12f14ec83cb833f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:18:16 compute-0 podman[104719]: 2025-09-30 14:18:16.598812987 +0000 UTC m=+0.207151820 container died 99ca1dcfb3f8e8a9815b6b21c9478c749953f1282142cb8b12f14ec83cb833f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tu, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa9bfbcd517acd85f8a09f26e13bce114e1605d02ad12c05f24cf7eaf862931a-merged.mount: Deactivated successfully.
Sep 30 14:18:16 compute-0 podman[104719]: 2025-09-30 14:18:16.656547547 +0000 UTC m=+0.264886380 container remove 99ca1dcfb3f8e8a9815b6b21c9478c749953f1282142cb8b12f14ec83cb833f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:18:16 compute-0 systemd[1]: libpod-conmon-99ca1dcfb3f8e8a9815b6b21c9478c749953f1282142cb8b12f14ec83cb833f3.scope: Deactivated successfully.
Sep 30 14:18:16 compute-0 podman[104764]: 2025-09-30 14:18:16.817603694 +0000 UTC m=+0.050496803 container create 1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:18:16 compute-0 systemd[1]: Started libpod-conmon-1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7.scope.
Sep 30 14:18:16 compute-0 ceph-mon[74194]: 12.10 scrub starts
Sep 30 14:18:16 compute-0 ceph-mon[74194]: 12.10 scrub ok
Sep 30 14:18:16 compute-0 ceph-mon[74194]: 9.a scrub starts
Sep 30 14:18:16 compute-0 ceph-mon[74194]: 9.a scrub ok
Sep 30 14:18:16 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Sep 30 14:18:16 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Sep 30 14:18:16 compute-0 ceph-mon[74194]: osdmap e91: 3 total, 3 up, 3 in
Sep 30 14:18:16 compute-0 podman[104764]: 2025-09-30 14:18:16.792690362 +0000 UTC m=+0.025583491 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb7fde2b9a49a60348428492bd496e1c78042d0dbd351a8f87d69abcf1c120e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb7fde2b9a49a60348428492bd496e1c78042d0dbd351a8f87d69abcf1c120e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb7fde2b9a49a60348428492bd496e1c78042d0dbd351a8f87d69abcf1c120e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb7fde2b9a49a60348428492bd496e1c78042d0dbd351a8f87d69abcf1c120e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:16 compute-0 podman[104764]: 2025-09-30 14:18:16.915638157 +0000 UTC m=+0.148531286 container init 1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 14:18:16 compute-0 podman[104764]: 2025-09-30 14:18:16.92269024 +0000 UTC m=+0.155583359 container start 1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:18:16 compute-0 podman[104764]: 2025-09-30 14:18:16.951317333 +0000 UTC m=+0.184210462 container attach 1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:18:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Sep 30 14:18:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Sep 30 14:18:17 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Sep 30 14:18:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:17 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:17 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Sep 30 14:18:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v23: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 130 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:17.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:17 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Sep 30 14:18:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 14:18:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:17.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 14:18:17 compute-0 lvm[104872]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:18:17 compute-0 lvm[104872]: VG ceph_vg0 finished
Sep 30 14:18:17 compute-0 hungry_taussig[104781]: {}
Sep 30 14:18:17 compute-0 systemd[1]: libpod-1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7.scope: Deactivated successfully.
Sep 30 14:18:17 compute-0 systemd[1]: libpod-1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7.scope: Consumed 1.152s CPU time.
Sep 30 14:18:17 compute-0 conmon[104781]: conmon 1712270cfffd12b622ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7.scope/container/memory.events
Sep 30 14:18:17 compute-0 podman[104764]: 2025-09-30 14:18:17.66811969 +0000 UTC m=+0.901012799 container died 1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:18:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fb7fde2b9a49a60348428492bd496e1c78042d0dbd351a8f87d69abcf1c120e-merged.mount: Deactivated successfully.
Sep 30 14:18:17 compute-0 podman[104764]: 2025-09-30 14:18:17.712006811 +0000 UTC m=+0.944899920 container remove 1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_taussig, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:18:17 compute-0 systemd[1]: libpod-conmon-1712270cfffd12b622ed05249ac56e819d6086b722f2eb988ed10919d3d7ffa7.scope: Deactivated successfully.
Sep 30 14:18:17 compute-0 sudo[104622]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Sep 30 14:18:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:17 compute-0 sudo[104895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:18:17 compute-0 sudo[104896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:18:17 compute-0 sudo[104896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:17 compute-0 sudo[104895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:17 compute-0 sudo[104896]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:17 compute-0 sudo[104895]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:17 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:18 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Sep 30 14:18:18 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Sep 30 14:18:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Sep 30 14:18:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:18:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Sep 30 14:18:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:18:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:18 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 14:18:18 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 14:18:18 compute-0 sudo[104951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:18 compute-0 sudo[104951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:18 compute-0 sudo[104951]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:18 compute-0 sudo[104977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:18 compute-0 sudo[104977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:18 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Sep 30 14:18:18 compute-0 ceph-mon[74194]: 10.15 scrub starts
Sep 30 14:18:18 compute-0 ceph-mon[74194]: 10.15 scrub ok
Sep 30 14:18:18 compute-0 ceph-mon[74194]: osdmap e92: 3 total, 3 up, 3 in
Sep 30 14:18:18 compute-0 ceph-mon[74194]: pgmap v23: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 130 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:18:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:18:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Sep 30 14:18:18 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Sep 30 14:18:18 compute-0 podman[105028]: 2025-09-30 14:18:18.424414866 +0000 UTC m=+0.044353235 container create a4176dd292e40894411ce28105593422bb89f5604c4c8d1ab6195ce4b9c5da1f (image=quay.io/ceph/ceph:v19, name=gifted_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:18:18 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Sep 30 14:18:18 compute-0 systemd[1]: Started libpod-conmon-a4176dd292e40894411ce28105593422bb89f5604c4c8d1ab6195ce4b9c5da1f.scope.
Sep 30 14:18:18 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Sep 30 14:18:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:18 compute-0 podman[105028]: 2025-09-30 14:18:18.404667196 +0000 UTC m=+0.024605585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:18:18 compute-0 podman[105028]: 2025-09-30 14:18:18.537319206 +0000 UTC m=+0.157257605 container init a4176dd292e40894411ce28105593422bb89f5604c4c8d1ab6195ce4b9c5da1f (image=quay.io/ceph/ceph:v19, name=gifted_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:18:18 compute-0 podman[105028]: 2025-09-30 14:18:18.546999531 +0000 UTC m=+0.166937900 container start a4176dd292e40894411ce28105593422bb89f5604c4c8d1ab6195ce4b9c5da1f (image=quay.io/ceph/ceph:v19, name=gifted_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:18:18 compute-0 gifted_bohr[105045]: 167 167
Sep 30 14:18:18 compute-0 systemd[1]: libpod-a4176dd292e40894411ce28105593422bb89f5604c4c8d1ab6195ce4b9c5da1f.scope: Deactivated successfully.
Sep 30 14:18:18 compute-0 podman[105028]: 2025-09-30 14:18:18.559003449 +0000 UTC m=+0.178941868 container attach a4176dd292e40894411ce28105593422bb89f5604c4c8d1ab6195ce4b9c5da1f (image=quay.io/ceph/ceph:v19, name=gifted_bohr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:18:18 compute-0 podman[105028]: 2025-09-30 14:18:18.55938693 +0000 UTC m=+0.179325319 container died a4176dd292e40894411ce28105593422bb89f5604c4c8d1ab6195ce4b9c5da1f (image=quay.io/ceph/ceph:v19, name=gifted_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:18:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffd844273db366209e37f13141866e66acd59743daa9537b68bf55ff88f57672-merged.mount: Deactivated successfully.
Sep 30 14:18:18 compute-0 podman[105028]: 2025-09-30 14:18:18.70010267 +0000 UTC m=+0.320041069 container remove a4176dd292e40894411ce28105593422bb89f5604c4c8d1ab6195ce4b9c5da1f (image=quay.io/ceph/ceph:v19, name=gifted_bohr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:18:18 compute-0 systemd[1]: libpod-conmon-a4176dd292e40894411ce28105593422bb89f5604c4c8d1ab6195ce4b9c5da1f.scope: Deactivated successfully.
Sep 30 14:18:18 compute-0 sudo[104977]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:18 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.buxlkm (monmap changed)...
Sep 30 14:18:18 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.buxlkm (monmap changed)...
Sep 30 14:18:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.buxlkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Sep 30 14:18:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.buxlkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 14:18:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 14:18:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:18:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:18 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.buxlkm on compute-0
Sep 30 14:18:18 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.buxlkm on compute-0
Sep 30 14:18:18 compute-0 sudo[105071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:18 compute-0 sudo[105071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:18 compute-0 sudo[105071]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:19 compute-0 sudo[105096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:19 compute-0 sudo[105096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:19 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:19 compute-0 ceph-mon[74194]: 10.14 scrub starts
Sep 30 14:18:19 compute-0 ceph-mon[74194]: 10.14 scrub ok
Sep 30 14:18:19 compute-0 ceph-mon[74194]: Reconfiguring mon.compute-0 (monmap changed)...
Sep 30 14:18:19 compute-0 ceph-mon[74194]: Reconfiguring daemon mon.compute-0 on compute-0
Sep 30 14:18:19 compute-0 ceph-mon[74194]: osdmap e93: 3 total, 3 up, 3 in
Sep 30 14:18:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.buxlkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Sep 30 14:18:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:18:19 compute-0 ceph-mon[74194]: 9.1d scrub starts
Sep 30 14:18:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:19 compute-0 ceph-mon[74194]: 9.1d scrub ok
Sep 30 14:18:19 compute-0 podman[105140]: 2025-09-30 14:18:19.405267927 +0000 UTC m=+0.096142872 container create bea8b18bdac23adb7c1523d82bb54905876744422bf0a33a327e7f7d85c72894 (image=quay.io/ceph/ceph:v19, name=competent_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:18:19 compute-0 podman[105140]: 2025-09-30 14:18:19.332611399 +0000 UTC m=+0.023486364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Sep 30 14:18:19 compute-0 systemd[1]: Started libpod-conmon-bea8b18bdac23adb7c1523d82bb54905876744422bf0a33a327e7f7d85c72894.scope.
Sep 30 14:18:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v25: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 130 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:18:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:18:19 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Sep 30 14:18:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:19 compute-0 podman[105140]: 2025-09-30 14:18:19.486345986 +0000 UTC m=+0.177220961 container init bea8b18bdac23adb7c1523d82bb54905876744422bf0a33a327e7f7d85c72894 (image=quay.io/ceph/ceph:v19, name=competent_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 14:18:19 compute-0 podman[105140]: 2025-09-30 14:18:19.493514802 +0000 UTC m=+0.184389747 container start bea8b18bdac23adb7c1523d82bb54905876744422bf0a33a327e7f7d85c72894 (image=quay.io/ceph/ceph:v19, name=competent_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:18:19 compute-0 competent_bardeen[105156]: 167 167
Sep 30 14:18:19 compute-0 systemd[1]: libpod-bea8b18bdac23adb7c1523d82bb54905876744422bf0a33a327e7f7d85c72894.scope: Deactivated successfully.
Sep 30 14:18:19 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Sep 30 14:18:19 compute-0 podman[105140]: 2025-09-30 14:18:19.515409301 +0000 UTC m=+0.206284266 container attach bea8b18bdac23adb7c1523d82bb54905876744422bf0a33a327e7f7d85c72894 (image=quay.io/ceph/ceph:v19, name=competent_bardeen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:18:19 compute-0 podman[105140]: 2025-09-30 14:18:19.516063939 +0000 UTC m=+0.206938884 container died bea8b18bdac23adb7c1523d82bb54905876744422bf0a33a327e7f7d85c72894 (image=quay.io/ceph/ceph:v19, name=competent_bardeen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:18:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:19.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-102c0a9f994216a4b1ec4e80180eb2ac2eff26dd18a9ebe833ae5adbadf62d1c-merged.mount: Deactivated successfully.
Sep 30 14:18:19 compute-0 podman[105140]: 2025-09-30 14:18:19.822850774 +0000 UTC m=+0.513725719 container remove bea8b18bdac23adb7c1523d82bb54905876744422bf0a33a327e7f7d85c72894 (image=quay.io/ceph/ceph:v19, name=competent_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:18:19 compute-0 systemd[1]: libpod-conmon-bea8b18bdac23adb7c1523d82bb54905876744422bf0a33a327e7f7d85c72894.scope: Deactivated successfully.
Sep 30 14:18:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:19 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a5a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:19 compute-0 sudo[105096]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:20 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Sep 30 14:18:20 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Sep 30 14:18:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Sep 30 14:18:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 14:18:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:20 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Sep 30 14:18:20 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Sep 30 14:18:20 compute-0 sudo[105177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:20 compute-0 sudo[105177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:20 compute-0 sudo[105177]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:20 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:20 compute-0 sudo[105202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:20 compute-0 sudo[105202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:20 compute-0 ceph-mon[74194]: 9.0 scrub starts
Sep 30 14:18:20 compute-0 ceph-mon[74194]: 9.0 scrub ok
Sep 30 14:18:20 compute-0 ceph-mon[74194]: Reconfiguring mgr.compute-0.buxlkm (monmap changed)...
Sep 30 14:18:20 compute-0 ceph-mon[74194]: Reconfiguring daemon mgr.compute-0.buxlkm on compute-0
Sep 30 14:18:20 compute-0 ceph-mon[74194]: pgmap v25: 337 pgs: 2 remapped+peering, 335 active+clean; 457 KiB data, 130 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:20 compute-0 ceph-mon[74194]: 9.4 scrub starts
Sep 30 14:18:20 compute-0 ceph-mon[74194]: 9.4 scrub ok
Sep 30 14:18:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 14:18:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:20 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Sep 30 14:18:20 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Sep 30 14:18:20 compute-0 podman[105242]: 2025-09-30 14:18:20.519970372 +0000 UTC m=+0.069758030 container create 965be6fbcd7bfcc4a1fc885ae0ac877a8ba6a69086fceeab2dd6e283cde494d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:18:20 compute-0 podman[105242]: 2025-09-30 14:18:20.473828319 +0000 UTC m=+0.023616247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:20 compute-0 systemd[1]: Started libpod-conmon-965be6fbcd7bfcc4a1fc885ae0ac877a8ba6a69086fceeab2dd6e283cde494d2.scope.
Sep 30 14:18:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:20 compute-0 podman[105242]: 2025-09-30 14:18:20.75741792 +0000 UTC m=+0.307205618 container init 965be6fbcd7bfcc4a1fc885ae0ac877a8ba6a69086fceeab2dd6e283cde494d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_volhard, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:18:20 compute-0 podman[105242]: 2025-09-30 14:18:20.763596389 +0000 UTC m=+0.313384047 container start 965be6fbcd7bfcc4a1fc885ae0ac877a8ba6a69086fceeab2dd6e283cde494d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:18:20 compute-0 kind_volhard[105258]: 167 167
Sep 30 14:18:20 compute-0 systemd[1]: libpod-965be6fbcd7bfcc4a1fc885ae0ac877a8ba6a69086fceeab2dd6e283cde494d2.scope: Deactivated successfully.
Sep 30 14:18:20 compute-0 podman[105242]: 2025-09-30 14:18:20.771785263 +0000 UTC m=+0.321572951 container attach 965be6fbcd7bfcc4a1fc885ae0ac877a8ba6a69086fceeab2dd6e283cde494d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_volhard, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:18:20 compute-0 podman[105242]: 2025-09-30 14:18:20.772747449 +0000 UTC m=+0.322535127 container died 965be6fbcd7bfcc4a1fc885ae0ac877a8ba6a69086fceeab2dd6e283cde494d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_volhard, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:18:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-86657f88a0a96f0eb6df22705893a0c54dad94017016e4b6b8affc8f5d16961e-merged.mount: Deactivated successfully.
Sep 30 14:18:20 compute-0 podman[105242]: 2025-09-30 14:18:20.959421638 +0000 UTC m=+0.509209296 container remove 965be6fbcd7bfcc4a1fc885ae0ac877a8ba6a69086fceeab2dd6e283cde494d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_volhard, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Sep 30 14:18:20 compute-0 systemd[1]: libpod-conmon-965be6fbcd7bfcc4a1fc885ae0ac877a8ba6a69086fceeab2dd6e283cde494d2.scope: Deactivated successfully.
Sep 30 14:18:21 compute-0 sudo[105202]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:21 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:21 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Sep 30 14:18:21 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Sep 30 14:18:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Sep 30 14:18:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Sep 30 14:18:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:21 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:21 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Sep 30 14:18:21 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Sep 30 14:18:21 compute-0 sudo[105278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:21 compute-0 sudo[105278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:21 compute-0 sudo[105278]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:21 compute-0 sudo[105303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:21 compute-0 sudo[105303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:21 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Sep 30 14:18:21 compute-0 ceph-osd[82707]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Sep 30 14:18:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v26: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Sep 30 14:18:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Sep 30 14:18:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Sep 30 14:18:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Sep 30 14:18:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:21.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:21.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:21 compute-0 ceph-mon[74194]: Reconfiguring crash.compute-0 (monmap changed)...
Sep 30 14:18:21 compute-0 ceph-mon[74194]: Reconfiguring daemon crash.compute-0 on compute-0
Sep 30 14:18:21 compute-0 ceph-mon[74194]: 9.11 scrub starts
Sep 30 14:18:21 compute-0 ceph-mon[74194]: 9.11 scrub ok
Sep 30 14:18:21 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:21 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:21 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Sep 30 14:18:21 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:21 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Sep 30 14:18:21 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Sep 30 14:18:21 compute-0 podman[105346]: 2025-09-30 14:18:21.719140288 +0000 UTC m=+0.056549438 container create 7b55f15ff5a30c9bb1ef2d3650d6b082c9f48a31fc09dc539b656f253c33e946 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jang, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:18:21 compute-0 systemd[1]: Started libpod-conmon-7b55f15ff5a30c9bb1ef2d3650d6b082c9f48a31fc09dc539b656f253c33e946.scope.
Sep 30 14:18:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:21 compute-0 podman[105346]: 2025-09-30 14:18:21.689022354 +0000 UTC m=+0.026431534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:21 compute-0 podman[105346]: 2025-09-30 14:18:21.799711033 +0000 UTC m=+0.137120203 container init 7b55f15ff5a30c9bb1ef2d3650d6b082c9f48a31fc09dc539b656f253c33e946 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:18:21 compute-0 podman[105346]: 2025-09-30 14:18:21.80764371 +0000 UTC m=+0.145052860 container start 7b55f15ff5a30c9bb1ef2d3650d6b082c9f48a31fc09dc539b656f253c33e946 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jang, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:18:21 compute-0 interesting_jang[105364]: 167 167
Sep 30 14:18:21 compute-0 systemd[1]: libpod-7b55f15ff5a30c9bb1ef2d3650d6b082c9f48a31fc09dc539b656f253c33e946.scope: Deactivated successfully.
Sep 30 14:18:21 compute-0 conmon[105364]: conmon 7b55f15ff5a30c9bb1ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b55f15ff5a30c9bb1ef2d3650d6b082c9f48a31fc09dc539b656f253c33e946.scope/container/memory.events
Sep 30 14:18:21 compute-0 podman[105346]: 2025-09-30 14:18:21.818566289 +0000 UTC m=+0.155975469 container attach 7b55f15ff5a30c9bb1ef2d3650d6b082c9f48a31fc09dc539b656f253c33e946 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:18:21 compute-0 podman[105346]: 2025-09-30 14:18:21.819008101 +0000 UTC m=+0.156417271 container died 7b55f15ff5a30c9bb1ef2d3650d6b082c9f48a31fc09dc539b656f253c33e946 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jang, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:18:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:21 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-67fe5bba530c05b00049acc30280936aaf597863276737192105209882583fef-merged.mount: Deactivated successfully.
Sep 30 14:18:21 compute-0 podman[105346]: 2025-09-30 14:18:21.961278145 +0000 UTC m=+0.298687295 container remove 7b55f15ff5a30c9bb1ef2d3650d6b082c9f48a31fc09dc539b656f253c33e946 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:18:21 compute-0 systemd[1]: libpod-conmon-7b55f15ff5a30c9bb1ef2d3650d6b082c9f48a31fc09dc539b656f253c33e946.scope: Deactivated successfully.
Sep 30 14:18:22 compute-0 sudo[105303]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:22 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a5c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Sep 30 14:18:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:22 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Sep 30 14:18:22 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Sep 30 14:18:22 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Sep 30 14:18:22 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Sep 30 14:18:22 compute-0 sudo[105393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:22 compute-0 sudo[105393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:22 compute-0 sudo[105393]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Sep 30 14:18:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Sep 30 14:18:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Sep 30 14:18:22 compute-0 sudo[105418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:22 compute-0 sudo[105418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:22 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Sep 30 14:18:22 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=51/22 lis/c=63/63 les/c/f=64/64/0 sis=94) [0] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:18:22 compute-0 ceph-mon[74194]: Reconfiguring osd.0 (monmap changed)...
Sep 30 14:18:22 compute-0 ceph-mon[74194]: Reconfiguring daemon osd.0 on compute-0
Sep 30 14:18:22 compute-0 ceph-mon[74194]: 9.10 scrub starts
Sep 30 14:18:22 compute-0 ceph-mon[74194]: 9.10 scrub ok
Sep 30 14:18:22 compute-0 ceph-mon[74194]: pgmap v26: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:22 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:22 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:22 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Sep 30 14:18:22 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Sep 30 14:18:22 compute-0 ceph-mon[74194]: osdmap e94: 3 total, 3 up, 3 in
Sep 30 14:18:22 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:18:22 compute-0 podman[105492]: 2025-09-30 14:18:22.96164161 +0000 UTC m=+0.087522286 container died 0d94fdcb0089ce3f537370219af53558d7149360386a0f8dbbd34c4af8a36ba9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c1b88179ff197e10250082c25cba9a1f3e4e0cff2c8bd2415f8dadc00fff3bc-merged.mount: Deactivated successfully.
Sep 30 14:18:23 compute-0 podman[105492]: 2025-09-30 14:18:23.206598373 +0000 UTC m=+0.332479049 container remove 0d94fdcb0089ce3f537370219af53558d7149360386a0f8dbbd34c4af8a36ba9 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:23 compute-0 bash[105492]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0
Sep 30 14:18:23 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:23 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:23 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@node-exporter.compute-0.service: Failed with result 'exit-code'.
Sep 30 14:18:23 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:18:23 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@node-exporter.compute-0.service: Consumed 2.205s CPU time.
Sep 30 14:18:23 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:18:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Sep 30 14:18:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v28: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Sep 30 14:18:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Sep 30 14:18:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Sep 30 14:18:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:23.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:23.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Sep 30 14:18:23 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Sep 30 14:18:23 compute-0 podman[105598]: 2025-09-30 14:18:23.581215955 +0000 UTC m=+0.051914272 container create 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:23 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 95 pg[6.f( v 48'39 lc 48'1 (0'0,48'39] local-lis/les=94/95 n=3 ec=51/22 lis/c=63/63 les/c/f=64/64/0 sis=94) [0] r=0 lpr=94 pi=[63,94)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:18:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b89773ca09ca1cd88128dc1f89e0223945ca21d46d214876d1280726948ae2/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:23 compute-0 podman[105598]: 2025-09-30 14:18:23.643599802 +0000 UTC m=+0.114298139 container init 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:23 compute-0 podman[105598]: 2025-09-30 14:18:23.64862405 +0000 UTC m=+0.119322367 container start 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:23 compute-0 podman[105598]: 2025-09-30 14:18:23.553058795 +0000 UTC m=+0.023757132 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Sep 30 14:18:23 compute-0 bash[105598]: 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.654Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.654Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Sep 30 14:18:23 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.659Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.659Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.659Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.659Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=arp
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=bcache
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=bonding
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=btrfs
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=conntrack
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=cpu
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=cpufreq
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=diskstats
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=dmi
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=edac
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=entropy
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=fibrechannel
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=filefd
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=filesystem
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=hwmon
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=infiniband
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=ipvs
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=loadavg
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=mdadm
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=meminfo
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=netclass
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=netdev
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=netstat
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=nfs
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=nfsd
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=nvme
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=os
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.660Z caller=node_exporter.go:117 level=info collector=pressure
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=rapl
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=schedstat
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=selinux
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=sockstat
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=softnet
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=stat
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=tapestats
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=textfile
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=thermal_zone
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=time
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=udp_queues
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=uname
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=vmstat
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=xfs
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.661Z caller=node_exporter.go:117 level=info collector=zfs
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.662Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0[105612]: ts=2025-09-30T14:18:23.662Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Sep 30 14:18:23 compute-0 sudo[105418]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:23 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:23 compute-0 ceph-mon[74194]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Sep 30 14:18:23 compute-0 ceph-mon[74194]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Sep 30 14:18:23 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Sep 30 14:18:23 compute-0 ceph-mon[74194]: osdmap e95: 3 total, 3 up, 3 in
Sep 30 14:18:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:24 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Sep 30 14:18:24 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Sep 30 14:18:24 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Sep 30 14:18:24 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Sep 30 14:18:24 compute-0 sudo[105623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:24 compute-0 sudo[105623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:24 compute-0 sudo[105623]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:24 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:24 compute-0 sudo[105648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:24 compute-0 sudo[105648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:24 compute-0 podman[105690]: 2025-09-30 14:18:24.481115972 +0000 UTC m=+0.039789670 volume create 3063d7a8b549e6276530076801728be83f06ac9688c5760ef171ffe0d7a5efbd
Sep 30 14:18:24 compute-0 podman[105690]: 2025-09-30 14:18:24.525612159 +0000 UTC m=+0.084285857 container create c962b5476ff48b22951b1a741f79715d0b68b67c4c37e41d5b8b095c5b405294 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Sep 30 14:18:24 compute-0 podman[105690]: 2025-09-30 14:18:24.463723346 +0000 UTC m=+0.022397064 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 14:18:24 compute-0 systemd[1]: Started libpod-conmon-c962b5476ff48b22951b1a741f79715d0b68b67c4c37e41d5b8b095c5b405294.scope.
Sep 30 14:18:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1489904d994f5a50d1456e96da63b89fd7ae12c79e8aa1e07ca2213b5a17ad1a/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:24 compute-0 podman[105690]: 2025-09-30 14:18:24.612539188 +0000 UTC m=+0.171212896 container init c962b5476ff48b22951b1a741f79715d0b68b67c4c37e41d5b8b095c5b405294 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:24 compute-0 podman[105690]: 2025-09-30 14:18:24.62062392 +0000 UTC m=+0.179297618 container start c962b5476ff48b22951b1a741f79715d0b68b67c4c37e41d5b8b095c5b405294 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:24 compute-0 modest_leakey[105706]: 65534 65534
Sep 30 14:18:24 compute-0 systemd[1]: libpod-c962b5476ff48b22951b1a741f79715d0b68b67c4c37e41d5b8b095c5b405294.scope: Deactivated successfully.
Sep 30 14:18:24 compute-0 podman[105690]: 2025-09-30 14:18:24.689384421 +0000 UTC m=+0.248058149 container attach c962b5476ff48b22951b1a741f79715d0b68b67c4c37e41d5b8b095c5b405294 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:24 compute-0 podman[105690]: 2025-09-30 14:18:24.691052377 +0000 UTC m=+0.249726075 container died c962b5476ff48b22951b1a741f79715d0b68b67c4c37e41d5b8b095c5b405294 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Sep 30 14:18:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Sep 30 14:18:24 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Sep 30 14:18:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=infra.usagestats t=2025-09-30T14:18:24.726714483Z level=info msg="Usage stats are ready to report"
Sep 30 14:18:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1489904d994f5a50d1456e96da63b89fd7ae12c79e8aa1e07ca2213b5a17ad1a-merged.mount: Deactivated successfully.
Sep 30 14:18:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:24] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Sep 30 14:18:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:24] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Sep 30 14:18:24 compute-0 podman[105690]: 2025-09-30 14:18:24.778947172 +0000 UTC m=+0.337620870 container remove c962b5476ff48b22951b1a741f79715d0b68b67c4c37e41d5b8b095c5b405294 (image=quay.io/prometheus/alertmanager:v0.25.0, name=modest_leakey, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:24 compute-0 podman[105690]: 2025-09-30 14:18:24.783680562 +0000 UTC m=+0.342354290 volume remove 3063d7a8b549e6276530076801728be83f06ac9688c5760ef171ffe0d7a5efbd
Sep 30 14:18:24 compute-0 systemd[1]: libpod-conmon-c962b5476ff48b22951b1a741f79715d0b68b67c4c37e41d5b8b095c5b405294.scope: Deactivated successfully.
Sep 30 14:18:24 compute-0 podman[105725]: 2025-09-30 14:18:24.840566518 +0000 UTC m=+0.036718955 volume create fbf368e23efab9acbaa72e886f6515578aee79d78abb0cf972b5c25ece510e10
Sep 30 14:18:24 compute-0 podman[105725]: 2025-09-30 14:18:24.874419155 +0000 UTC m=+0.070571602 container create 199fab2596086cf55868df16386f703c12718d7cfb2658d8245d120ca3a39f09 (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:24 compute-0 systemd[1]: Started libpod-conmon-199fab2596086cf55868df16386f703c12718d7cfb2658d8245d120ca3a39f09.scope.
Sep 30 14:18:24 compute-0 podman[105725]: 2025-09-30 14:18:24.826864184 +0000 UTC m=+0.023016641 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 14:18:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/756040a86d46f996e1a4953fe1149d10cdf1a590e44980e49eefee26ef6eb508/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:24 compute-0 podman[105725]: 2025-09-30 14:18:24.96631374 +0000 UTC m=+0.162466197 container init 199fab2596086cf55868df16386f703c12718d7cfb2658d8245d120ca3a39f09 (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:24 compute-0 podman[105725]: 2025-09-30 14:18:24.972013936 +0000 UTC m=+0.168166373 container start 199fab2596086cf55868df16386f703c12718d7cfb2658d8245d120ca3a39f09 (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:24 compute-0 priceless_williams[105742]: 65534 65534
Sep 30 14:18:24 compute-0 systemd[1]: libpod-199fab2596086cf55868df16386f703c12718d7cfb2658d8245d120ca3a39f09.scope: Deactivated successfully.
Sep 30 14:18:24 compute-0 podman[105725]: 2025-09-30 14:18:24.975427649 +0000 UTC m=+0.171580116 container attach 199fab2596086cf55868df16386f703c12718d7cfb2658d8245d120ca3a39f09 (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:24 compute-0 podman[105725]: 2025-09-30 14:18:24.976612841 +0000 UTC m=+0.172765298 container died 199fab2596086cf55868df16386f703c12718d7cfb2658d8245d120ca3a39f09 (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-756040a86d46f996e1a4953fe1149d10cdf1a590e44980e49eefee26ef6eb508-merged.mount: Deactivated successfully.
Sep 30 14:18:25 compute-0 podman[105725]: 2025-09-30 14:18:25.024015389 +0000 UTC m=+0.220167826 container remove 199fab2596086cf55868df16386f703c12718d7cfb2658d8245d120ca3a39f09 (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:25 compute-0 podman[105725]: 2025-09-30 14:18:25.028675096 +0000 UTC m=+0.224827543 volume remove fbf368e23efab9acbaa72e886f6515578aee79d78abb0cf972b5c25ece510e10
Sep 30 14:18:25 compute-0 systemd[1]: libpod-conmon-199fab2596086cf55868df16386f703c12718d7cfb2658d8245d120ca3a39f09.scope: Deactivated successfully.
Sep 30 14:18:25 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:18:25 compute-0 ceph-mon[74194]: pgmap v28: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Sep 30 14:18:25 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:25 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:25 compute-0 ceph-mon[74194]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Sep 30 14:18:25 compute-0 ceph-mon[74194]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Sep 30 14:18:25 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Sep 30 14:18:25 compute-0 ceph-mon[74194]: osdmap e96: 3 total, 3 up, 3 in
Sep 30 14:18:25 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 96 pg[9.10( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=2 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=96 pruub=14.534519196s) [1] r=-1 lpr=96 pi=[55,96)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 286.537231445s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:25 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 96 pg[9.10( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=2 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=96 pruub=14.534436226s) [1] r=-1 lpr=96 pi=[55,96)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.537231445s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[98363]: ts=2025-09-30T14:18:25.258Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Sep 30 14:18:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:25 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a5e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:25 compute-0 podman[105791]: 2025-09-30 14:18:25.309609694 +0000 UTC m=+0.093215852 container died bd20ee432b94b120e4d4e48f8e160634ffb584df5fe8133f3bd8a9cff9cb64c7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Sep 30 14:18:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Sep 30 14:18:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7007a0b098904ef6e90e9786a87ccacf3f39b864b9dccf403d481cb5c7c0584-merged.mount: Deactivated successfully.
Sep 30 14:18:25 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Sep 30 14:18:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v32: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Sep 30 14:18:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Sep 30 14:18:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:25.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:25 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 97 pg[9.10( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=2 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=97) [1]/[0] r=0 lpr=97 pi=[55,97)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:25 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 97 pg[9.10( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=2 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=97) [1]/[0] r=0 lpr=97 pi=[55,97)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:18:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:25.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:25 compute-0 podman[105791]: 2025-09-30 14:18:25.553993342 +0000 UTC m=+0.337599500 container remove bd20ee432b94b120e4d4e48f8e160634ffb584df5fe8133f3bd8a9cff9cb64c7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:25 compute-0 podman[105791]: 2025-09-30 14:18:25.673791 +0000 UTC m=+0.457397168 volume remove ddc0ab3592974bb99080d8c2adea1ce5e08c5ee1460ca5d4ada1331f578f0139
Sep 30 14:18:25 compute-0 bash[105791]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0
Sep 30 14:18:25 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@alertmanager.compute-0.service: Deactivated successfully.
Sep 30 14:18:25 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:18:25 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@alertmanager.compute-0.service: Consumed 1.017s CPU time.
Sep 30 14:18:25 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:18:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:25 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:26 compute-0 podman[105897]: 2025-09-30 14:18:26.015331506 +0000 UTC m=+0.027694679 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Sep 30 14:18:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:26 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:26 compute-0 podman[105897]: 2025-09-30 14:18:26.173132424 +0000 UTC m=+0.185495567 volume create 258903b74646813d6be7b11c9094db4b0831dadd118801e70e6cedf97f421dd3
Sep 30 14:18:26 compute-0 podman[105897]: 2025-09-30 14:18:26.357674715 +0000 UTC m=+0.370037858 container create b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Sep 30 14:18:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Sep 30 14:18:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Sep 30 14:18:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce118250d3ce0767e2fdf906f212129f04c7d754fc777b4e8174c7fd227e7971/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce118250d3ce0767e2fdf906f212129f04c7d754fc777b4e8174c7fd227e7971/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:26 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Sep 30 14:18:26 compute-0 podman[105897]: 2025-09-30 14:18:26.922675316 +0000 UTC m=+0.935038479 container init b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:26 compute-0 podman[105897]: 2025-09-30 14:18:26.928586508 +0000 UTC m=+0.940949651 container start b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0[100450]: ts=2025-09-30T14:18:26.961Z caller=notifier.go:544 level=error component=notifier alertmanager=http://192.168.122.100:9093/api/v2/alerts count=1 msg="Error sending alert" err="Post \"http://192.168.122.100:9093/api/v2/alerts\": dial tcp 192.168.122.100:9093: connect: connection refused"
Sep 30 14:18:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:18:26.969Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Sep 30 14:18:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:18:26.969Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Sep 30 14:18:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:18:26.981Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Sep 30 14:18:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:18:26.982Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Sep 30 14:18:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:18:27.017Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Sep 30 14:18:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:18:27.017Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Sep 30 14:18:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:18:27.022Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Sep 30 14:18:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:18:27.023Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Sep 30 14:18:27 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 98 pg[9.11( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=98 pruub=12.671278000s) [1] r=-1 lpr=98 pi=[55,98)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 286.537261963s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:27 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 98 pg[9.11( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=98 pruub=12.671242714s) [1] r=-1 lpr=98 pi=[55,98)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.537261963s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:27 compute-0 bash[105897]: b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3
Sep 30 14:18:27 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:18:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:27 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:27 compute-0 sudo[105648]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:27 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 98 pg[9.10( v 48'1157 (0'0,48'1157] local-lis/les=97/98 n=2 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[55,97)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:18:27 compute-0 ceph-mon[74194]: osdmap e97: 3 total, 3 up, 3 in
Sep 30 14:18:27 compute-0 ceph-mon[74194]: pgmap v32: 337 pgs: 337 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:27 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Sep 30 14:18:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:27 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Sep 30 14:18:27 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Sep 30 14:18:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v34: 337 pgs: 1 remapped+peering, 2 peering, 334 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 3 objects/s recovering
Sep 30 14:18:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:27.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:27 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Sep 30 14:18:27 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Sep 30 14:18:27 compute-0 sudo[105937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:27 compute-0 sudo[105937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:27 compute-0 sudo[105937]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:18:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:27.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:18:27 compute-0 sudo[105962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
Sep 30 14:18:27 compute-0 sudo[105962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Sep 30 14:18:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Sep 30 14:18:27 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 99 pg[9.10( v 48'1157 (0'0,48'1157] local-lis/les=97/98 n=2 ec=55/37 lis/c=97/55 les/c/f=98/56/0 sis=99 pruub=15.508930206s) [1] async=[1] r=-1 lpr=99 pi=[55,99)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 290.174652100s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:27 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 99 pg[9.10( v 48'1157 (0'0,48'1157] local-lis/les=97/98 n=2 ec=55/37 lis/c=97/55 les/c/f=98/56/0 sis=99 pruub=15.508855820s) [1] r=-1 lpr=99 pi=[55,99)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 290.174652100s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:27 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 99 pg[9.11( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=99) [1]/[0] r=0 lpr=99 pi=[55,99)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:27 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 99 pg[9.11( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=99) [1]/[0] r=0 lpr=99 pi=[55,99)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:18:27 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Sep 30 14:18:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:27 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:28 compute-0 podman[106005]: 2025-09-30 14:18:28.091646287 +0000 UTC m=+0.042900835 container create 5d71f3dfc3e07d489091cf6d6619dacc015ef8cefe03ac5cdd3602abebe50a80 (image=quay.io/ceph/grafana:10.4.0, name=bold_black, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 systemd[1]: Started libpod-conmon-5d71f3dfc3e07d489091cf6d6619dacc015ef8cefe03ac5cdd3602abebe50a80.scope.
Sep 30 14:18:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:28 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:28 compute-0 podman[106005]: 2025-09-30 14:18:28.072598875 +0000 UTC m=+0.023853453 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 14:18:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:28 compute-0 podman[106005]: 2025-09-30 14:18:28.189934006 +0000 UTC m=+0.141188574 container init 5d71f3dfc3e07d489091cf6d6619dacc015ef8cefe03ac5cdd3602abebe50a80 (image=quay.io/ceph/grafana:10.4.0, name=bold_black, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 podman[106005]: 2025-09-30 14:18:28.198015708 +0000 UTC m=+0.149270256 container start 5d71f3dfc3e07d489091cf6d6619dacc015ef8cefe03ac5cdd3602abebe50a80 (image=quay.io/ceph/grafana:10.4.0, name=bold_black, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 bold_black[106021]: 472 0
Sep 30 14:18:28 compute-0 systemd[1]: libpod-5d71f3dfc3e07d489091cf6d6619dacc015ef8cefe03ac5cdd3602abebe50a80.scope: Deactivated successfully.
Sep 30 14:18:28 compute-0 podman[106005]: 2025-09-30 14:18:28.202336756 +0000 UTC m=+0.153591324 container attach 5d71f3dfc3e07d489091cf6d6619dacc015ef8cefe03ac5cdd3602abebe50a80 (image=quay.io/ceph/grafana:10.4.0, name=bold_black, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 podman[106005]: 2025-09-30 14:18:28.202750767 +0000 UTC m=+0.154005335 container died 5d71f3dfc3e07d489091cf6d6619dacc015ef8cefe03ac5cdd3602abebe50a80 (image=quay.io/ceph/grafana:10.4.0, name=bold_black, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7afc7c709089b1859f4f615906796358f9e429dafcd05701658432a282e65eaf-merged.mount: Deactivated successfully.
Sep 30 14:18:28 compute-0 podman[106005]: 2025-09-30 14:18:28.24268242 +0000 UTC m=+0.193936968 container remove 5d71f3dfc3e07d489091cf6d6619dacc015ef8cefe03ac5cdd3602abebe50a80 (image=quay.io/ceph/grafana:10.4.0, name=bold_black, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 systemd[1]: libpod-conmon-5d71f3dfc3e07d489091cf6d6619dacc015ef8cefe03ac5cdd3602abebe50a80.scope: Deactivated successfully.
Sep 30 14:18:28 compute-0 podman[106038]: 2025-09-30 14:18:28.30628227 +0000 UTC m=+0.044318433 container create f8f0b77594d688e7dc9119ac23cf2b3d11a2fd3ed5595d98278d4ee5ddeaca04 (image=quay.io/ceph/grafana:10.4.0, name=relaxed_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 systemd[1]: Started libpod-conmon-f8f0b77594d688e7dc9119ac23cf2b3d11a2fd3ed5595d98278d4ee5ddeaca04.scope.
Sep 30 14:18:28 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Sep 30 14:18:28 compute-0 ceph-mon[74194]: osdmap e98: 3 total, 3 up, 3 in
Sep 30 14:18:28 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:28 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:28 compute-0 ceph-mon[74194]: Reconfiguring grafana.compute-0 (dependencies changed)...
Sep 30 14:18:28 compute-0 ceph-mon[74194]: pgmap v34: 337 pgs: 1 remapped+peering, 2 peering, 334 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 3 objects/s recovering
Sep 30 14:18:28 compute-0 ceph-mon[74194]: Reconfiguring daemon grafana.compute-0 on compute-0
Sep 30 14:18:28 compute-0 ceph-mon[74194]: osdmap e99: 3 total, 3 up, 3 in
Sep 30 14:18:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:28 compute-0 podman[106038]: 2025-09-30 14:18:28.381562231 +0000 UTC m=+0.119598414 container init f8f0b77594d688e7dc9119ac23cf2b3d11a2fd3ed5595d98278d4ee5ddeaca04 (image=quay.io/ceph/grafana:10.4.0, name=relaxed_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 podman[106038]: 2025-09-30 14:18:28.288203916 +0000 UTC m=+0.026240119 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 14:18:28 compute-0 podman[106038]: 2025-09-30 14:18:28.387250346 +0000 UTC m=+0.125286509 container start f8f0b77594d688e7dc9119ac23cf2b3d11a2fd3ed5595d98278d4ee5ddeaca04 (image=quay.io/ceph/grafana:10.4.0, name=relaxed_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 relaxed_dijkstra[106054]: 472 0
Sep 30 14:18:28 compute-0 systemd[1]: libpod-f8f0b77594d688e7dc9119ac23cf2b3d11a2fd3ed5595d98278d4ee5ddeaca04.scope: Deactivated successfully.
Sep 30 14:18:28 compute-0 podman[106038]: 2025-09-30 14:18:28.393271071 +0000 UTC m=+0.131307254 container attach f8f0b77594d688e7dc9119ac23cf2b3d11a2fd3ed5595d98278d4ee5ddeaca04 (image=quay.io/ceph/grafana:10.4.0, name=relaxed_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 podman[106038]: 2025-09-30 14:18:28.393646201 +0000 UTC m=+0.131682364 container died f8f0b77594d688e7dc9119ac23cf2b3d11a2fd3ed5595d98278d4ee5ddeaca04 (image=quay.io/ceph/grafana:10.4.0, name=relaxed_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ec5fa606b90caa92add9617bdfbedf896a02ce26a7ccee2f00285e7da04d72a-merged.mount: Deactivated successfully.
Sep 30 14:18:28 compute-0 podman[106038]: 2025-09-30 14:18:28.432877615 +0000 UTC m=+0.170913778 container remove f8f0b77594d688e7dc9119ac23cf2b3d11a2fd3ed5595d98278d4ee5ddeaca04 (image=quay.io/ceph/grafana:10.4.0, name=relaxed_dijkstra, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 systemd[1]: libpod-conmon-f8f0b77594d688e7dc9119ac23cf2b3d11a2fd3ed5595d98278d4ee5ddeaca04.scope: Deactivated successfully.
Sep 30 14:18:28 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:18:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=server t=2025-09-30T14:18:28.694091123Z level=info msg="Shutdown started" reason="System signal: terminated"
Sep 30 14:18:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=ticker t=2025-09-30T14:18:28.694364161Z level=info msg=stopped last_tick=2025-09-30T14:18:20Z
Sep 30 14:18:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=tracing t=2025-09-30T14:18:28.694450123Z level=info msg="Closing tracing"
Sep 30 14:18:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=grafana-apiserver t=2025-09-30T14:18:28.694725111Z level=info msg="StorageObjectCountTracker pruner is exiting"
Sep 30 14:18:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[98887]: logger=sqlstore.transactions t=2025-09-30T14:18:28.706602486Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Sep 30 14:18:28 compute-0 podman[106103]: 2025-09-30 14:18:28.724745812 +0000 UTC m=+0.084099302 container died 93c8c5607d3b21ae8cda4d1f43e88d294c0ac0bcb4ca72548c6be243950b6313 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-60fbe55edb8aeed63fb88162a2bf9bacdaa3d5779c54b3650ef61b3a1a18afcc-merged.mount: Deactivated successfully.
Sep 30 14:18:28 compute-0 podman[106103]: 2025-09-30 14:18:28.771104551 +0000 UTC m=+0.130458041 container remove 93c8c5607d3b21ae8cda4d1f43e88d294c0ac0bcb4ca72548c6be243950b6313 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:28 compute-0 bash[106103]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0
Sep 30 14:18:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Sep 30 14:18:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Sep 30 14:18:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Sep 30 14:18:28 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@grafana.compute-0.service: Deactivated successfully.
Sep 30 14:18:28 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:18:28 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@grafana.compute-0.service: Consumed 4.224s CPU time.
Sep 30 14:18:28 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:18:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:18:28.983Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000727462s
Sep 30 14:18:29 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 100 pg[9.11( v 48'1157 (0'0,48'1157] local-lis/les=99/100 n=5 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[55,99)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:18:29 compute-0 podman[106202]: 2025-09-30 14:18:29.097118242 +0000 UTC m=+0.028873221 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Sep 30 14:18:29 compute-0 podman[106202]: 2025-09-30 14:18:29.22091742 +0000 UTC m=+0.152672359 container create 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:29 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a171f11618633ff2ee338e8912f22cea6635d91e4ba555d19b3b3f1888cef84b/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a171f11618633ff2ee338e8912f22cea6635d91e4ba555d19b3b3f1888cef84b/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a171f11618633ff2ee338e8912f22cea6635d91e4ba555d19b3b3f1888cef84b/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a171f11618633ff2ee338e8912f22cea6635d91e4ba555d19b3b3f1888cef84b/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a171f11618633ff2ee338e8912f22cea6635d91e4ba555d19b3b3f1888cef84b/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:29 compute-0 podman[106202]: 2025-09-30 14:18:29.416910184 +0000 UTC m=+0.348665153 container init 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:29 compute-0 podman[106202]: 2025-09-30 14:18:29.42263537 +0000 UTC m=+0.354390319 container start 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v37: 337 pgs: 1 remapped+peering, 2 peering, 334 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 206 B/s, 3 objects/s recovering
Sep 30 14:18:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:29.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:29 compute-0 bash[106202]: 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0
Sep 30 14:18:29 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:18:29 compute-0 sudo[105962]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:29.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612397783Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-09-30T14:18:29Z
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.61266389Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.6126757Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.61267979Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.61268329Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.61268665Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612690101Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612693541Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612697201Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612701851Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612705021Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612708211Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612711511Z level=info msg=Target target=[all]
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612720441Z level=info msg="Path Home" path=/usr/share/grafana
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612723762Z level=info msg="Path Data" path=/var/lib/grafana
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612727002Z level=info msg="Path Logs" path=/var/log/grafana
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612730072Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612733382Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=settings t=2025-09-30T14:18:29.612736532Z level=info msg="App mode production"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=sqlstore t=2025-09-30T14:18:29.612991219Z level=info msg="Connecting to DB" dbtype=sqlite3
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=sqlstore t=2025-09-30T14:18:29.613013329Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=migrator t=2025-09-30T14:18:29.613569065Z level=info msg="Starting DB migrations"
Sep 30 14:18:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:18:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=migrator t=2025-09-30T14:18:29.634096186Z level=info msg="migrations completed" performed=0 skipped=547 duration=572.275µs
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=sqlstore t=2025-09-30T14:18:29.635152035Z level=info msg="Created default organization"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=secrets t=2025-09-30T14:18:29.635725571Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=plugin.store t=2025-09-30T14:18:29.655335868Z level=info msg="Loading plugins..."
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=local.finder t=2025-09-30T14:18:29.738933995Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=plugin.store t=2025-09-30T14:18:29.738972186Z level=info msg="Plugins loaded" count=55 duration=83.636948ms
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=query_data t=2025-09-30T14:18:29.741434854Z level=info msg="Query Service initialization"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=live.push_http t=2025-09-30T14:18:29.74494747Z level=info msg="Live Push Gateway initialization"
Sep 30 14:18:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=ngalert.migration t=2025-09-30T14:18:29.759438016Z level=info msg=Starting
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=ngalert.state.manager t=2025-09-30T14:18:29.803949635Z level=info msg="Running in alternative execution of Error/NoData mode"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=infra.usagestats.collector t=2025-09-30T14:18:29.8111025Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=provisioning.datasources t=2025-09-30T14:18:29.814225826Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Sep 30 14:18:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=provisioning.alerting t=2025-09-30T14:18:29.840011151Z level=info msg="starting to provision alerting"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=provisioning.alerting t=2025-09-30T14:18:29.840042852Z level=info msg="finished to provision alerting"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=ngalert.state.manager t=2025-09-30T14:18:29.840786083Z level=info msg="Warming state cache for startup"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=ngalert.multiorg.alertmanager t=2025-09-30T14:18:29.840868285Z level=info msg="Starting MultiOrg Alertmanager"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=ngalert.state.manager t=2025-09-30T14:18:29.84142836Z level=info msg="State cache has been initialized" states=0 duration=641.197µs
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=ngalert.scheduler t=2025-09-30T14:18:29.841482022Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=ticker t=2025-09-30T14:18:29.841560484Z level=info msg=starting first_tick=2025-09-30T14:18:30Z
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=provisioning.dashboard t=2025-09-30T14:18:29.843292761Z level=info msg="starting to provision dashboards"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=grafanaStorageLogger t=2025-09-30T14:18:29.85055614Z level=info msg="Storage starting"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=provisioning.dashboard t=2025-09-30T14:18:29.85933436Z level=info msg="finished to provision dashboards"
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=http.server t=2025-09-30T14:18:29.861757967Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=http.server t=2025-09-30T14:18:29.862105406Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:29 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=plugins.update.checker t=2025-09-30T14:18:29.908941208Z level=info msg="Update check succeeded" duration=67.983671ms
Sep 30 14:18:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=grafana.update.checker t=2025-09-30T14:18:29.912655329Z level=info msg="Update check succeeded" duration=71.861576ms
Sep 30 14:18:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Sep 30 14:18:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Sep 30 14:18:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 14:18:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Sep 30 14:18:29 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Sep 30 14:18:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Sep 30 14:18:30 compute-0 ceph-mon[74194]: osdmap e100: 3 total, 3 up, 3 in
Sep 30 14:18:30 compute-0 ceph-mon[74194]: pgmap v37: 337 pgs: 1 remapped+peering, 2 peering, 334 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 206 B/s, 3 objects/s recovering
Sep 30 14:18:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:18:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 101 pg[9.11( v 48'1157 (0'0,48'1157] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/55 les/c/f=100/56/0 sis=101 pruub=15.037313461s) [1] async=[1] r=-1 lpr=101 pi=[55,101)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 291.978637695s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 101 pg[9.11( v 48'1157 (0'0,48'1157] local-lis/les=99/100 n=5 ec=55/37 lis/c=99/55 les/c/f=100/56/0 sis=101 pruub=15.037248611s) [1] r=-1 lpr=101 pi=[55,101)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 291.978637695s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:30 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Sep 30 14:18:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:30 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=grafana-apiserver t=2025-09-30T14:18:30.272660821Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Sep 30 14:18:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=grafana-apiserver t=2025-09-30T14:18:30.273594427Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Sep 30 14:18:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:18:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:18:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:30 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Sep 30 14:18:30 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Sep 30 14:18:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Sep 30 14:18:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Sep 30 14:18:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:30 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:30 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Sep 30 14:18:30 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Sep 30 14:18:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Sep 30 14:18:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:31 compute-0 ceph-mon[74194]: Reconfiguring crash.compute-1 (monmap changed)...
Sep 30 14:18:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Sep 30 14:18:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:31 compute-0 ceph-mon[74194]: Reconfiguring daemon crash.compute-1 on compute-1
Sep 30 14:18:31 compute-0 ceph-mon[74194]: osdmap e101: 3 total, 3 up, 3 in
Sep 30 14:18:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Sep 30 14:18:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Sep 30 14:18:31 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Sep 30 14:18:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:31 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v40: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Sep 30 14:18:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:31.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:31.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:18:31 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:18:31 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:31 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Sep 30 14:18:31 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Sep 30 14:18:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Sep 30 14:18:31 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:18:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Sep 30 14:18:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:18:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:31 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Sep 30 14:18:31 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Sep 30 14:18:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:31 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:32 compute-0 ceph-mon[74194]: Reconfiguring osd.1 (monmap changed)...
Sep 30 14:18:32 compute-0 ceph-mon[74194]: Reconfiguring daemon osd.1 on compute-1
Sep 30 14:18:32 compute-0 ceph-mon[74194]: osdmap e102: 3 total, 3 up, 3 in
Sep 30 14:18:32 compute-0 ceph-mon[74194]: pgmap v40: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Sep 30 14:18:32 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:32 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:32 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:18:32 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:18:32 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:32 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:18:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:18:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:32 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Sep 30 14:18:32 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Sep 30 14:18:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Sep 30 14:18:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:18:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Sep 30 14:18:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:18:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:32 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Sep 30 14:18:32 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:33 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring rgw.rgw.compute-2.evkboy (unknown last config time)...
Sep 30 14:18:33 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring rgw.rgw.compute-2.evkboy (unknown last config time)...
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.evkboy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.evkboy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mgr[74485]: [cephadm INFO cephadm.serve] Reconfiguring daemon rgw.rgw.compute-2.evkboy on compute-2
Sep 30 14:18:33 compute-0 ceph-mgr[74485]: log_channel(cephadm) log [INF] : Reconfiguring daemon rgw.rgw.compute-2.evkboy on compute-2
Sep 30 14:18:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:33 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v41: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Sep 30 14:18:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:33 compute-0 ceph-mon[74194]: Reconfiguring mon.compute-1 (monmap changed)...
Sep 30 14:18:33 compute-0 ceph-mon[74194]: Reconfiguring daemon mon.compute-1 on compute-1
Sep 30 14:18:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:33 compute-0 ceph-mon[74194]: Reconfiguring mon.compute-2 (monmap changed)...
Sep 30 14:18:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mon[74194]: Reconfiguring daemon mon.compute-2 on compute-2
Sep 30 14:18:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.evkboy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:33.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Sep 30 14:18:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Sep 30 14:18:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:33 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:33 compute-0 ceph-mgr[74485]: [prometheus INFO root] Restarting engine...
Sep 30 14:18:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: [30/Sep/2025:14:18:33] ENGINE Bus STOPPING
Sep 30 14:18:33 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.error] [30/Sep/2025:14:18:33] ENGINE Bus STOPPING
Sep 30 14:18:34 compute-0 sudo[106271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:34 compute-0 sudo[106271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:34 compute-0 sudo[106271]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:34 compute-0 sudo[106296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:18:34 compute-0 sudo[106296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: [30/Sep/2025:14:18:34] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Sep 30 14:18:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: [30/Sep/2025:14:18:34] ENGINE Bus STOPPED
Sep 30 14:18:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.error] [30/Sep/2025:14:18:34] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Sep 30 14:18:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.error] [30/Sep/2025:14:18:34] ENGINE Bus STOPPED
Sep 30 14:18:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: [30/Sep/2025:14:18:34] ENGINE Bus STARTING
Sep 30 14:18:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.error] [30/Sep/2025:14:18:34] ENGINE Bus STARTING
Sep 30 14:18:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: [30/Sep/2025:14:18:34] ENGINE Serving on http://:::9283
Sep 30 14:18:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.error] [30/Sep/2025:14:18:34] ENGINE Serving on http://:::9283
Sep 30 14:18:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: [30/Sep/2025:14:18:34] ENGINE Bus STARTED
Sep 30 14:18:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.error] [30/Sep/2025:14:18:34] ENGINE Bus STARTED
Sep 30 14:18:34 compute-0 ceph-mgr[74485]: [prometheus INFO root] Engine started.
Sep 30 14:18:34 compute-0 ceph-mon[74194]: Reconfiguring rgw.rgw.compute-2.evkboy (unknown last config time)...
Sep 30 14:18:34 compute-0 ceph-mon[74194]: Reconfiguring daemon rgw.rgw.compute-2.evkboy on compute-2
Sep 30 14:18:34 compute-0 ceph-mon[74194]: pgmap v41: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Sep 30 14:18:34 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:34 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:34 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Sep 30 14:18:34 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Sep 30 14:18:34 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Sep 30 14:18:34 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:34 compute-0 podman[106415]: 2025-09-30 14:18:34.630523508 +0000 UTC m=+0.056214279 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:18:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:34] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Sep 30 14:18:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:34] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Sep 30 14:18:34 compute-0 podman[106415]: 2025-09-30 14:18:34.735329456 +0000 UTC m=+0.161020227 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:18:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v42: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Sep 30 14:18:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:35.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:35.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:35 compute-0 podman[106542]: 2025-09-30 14:18:35.759691739 +0000 UTC m=+0.640152049 container exec 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:35 compute-0 ceph-mon[74194]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Sep 30 14:18:35 compute-0 ceph-mon[74194]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Sep 30 14:18:35 compute-0 ceph-mon[74194]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Sep 30 14:18:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:36 compute-0 podman[106571]: 2025-09-30 14:18:36.059402301 +0000 UTC m=+0.284397034 container exec_died 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:36 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:36 compute-0 podman[106542]: 2025-09-30 14:18:36.181077431 +0000 UTC m=+1.061537721 container exec_died 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:18:36.986Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003504304s
Sep 30 14:18:37 compute-0 ceph-mon[74194]: pgmap v42: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Sep 30 14:18:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:37 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:37 compute-0 podman[106640]: 2025-09-30 14:18:37.360272969 +0000 UTC m=+0.378220901 container exec 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:18:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v43: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Sep 30 14:18:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Sep 30 14:18:37 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Sep 30 14:18:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:37.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:37 compute-0 podman[106661]: 2025-09-30 14:18:37.488362105 +0000 UTC m=+0.107104952 container exec_died 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 14:18:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:37.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:37 compute-0 podman[106640]: 2025-09-30 14:18:37.574342488 +0000 UTC m=+0.592290400 container exec_died 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:18:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:37 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:37 compute-0 sudo[106706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:18:37 compute-0 sudo[106706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:37 compute-0 sudo[106706]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:38 compute-0 podman[106709]: 2025-09-30 14:18:38.100916868 +0000 UTC m=+0.199019357 container exec ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:18:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:38 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:38 compute-0 podman[106752]: 2025-09-30 14:18:38.197391188 +0000 UTC m=+0.076981808 container exec_died ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:18:38 compute-0 podman[106709]: 2025-09-30 14:18:38.205927542 +0000 UTC m=+0.304030031 container exec_died ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:18:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Sep 30 14:18:38 compute-0 ceph-mon[74194]: pgmap v43: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Sep 30 14:18:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Sep 30 14:18:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Sep 30 14:18:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Sep 30 14:18:38 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Sep 30 14:18:38 compute-0 podman[106798]: 2025-09-30 14:18:38.66922762 +0000 UTC m=+0.135523199 container exec df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, release=1793, vcs-type=git, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph)
Sep 30 14:18:38 compute-0 podman[106819]: 2025-09-30 14:18:38.879365131 +0000 UTC m=+0.192989812 container exec_died df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, distribution-scope=public, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Sep 30 14:18:38 compute-0 podman[106798]: 2025-09-30 14:18:38.948502103 +0000 UTC m=+0.414797682 container exec_died df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, version=2.2.4, description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, vcs-type=git, io.openshift.expose-services=)
Sep 30 14:18:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:39 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:39 compute-0 podman[106865]: 2025-09-30 14:18:39.310396837 +0000 UTC m=+0.196277663 container exec b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:39 compute-0 podman[106865]: 2025-09-30 14:18:39.338543577 +0000 UTC m=+0.224424383 container exec_died b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:39 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 103 pg[9.12( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=4 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=103 pruub=8.288415909s) [1] r=-1 lpr=103 pi=[55,103)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 294.540710449s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:39 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 103 pg[9.12( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=4 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=103 pruub=8.288374901s) [1] r=-1 lpr=103 pi=[55,103)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 294.540710449s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v45: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 369 B/s rd, 0 op/s
Sep 30 14:18:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Sep 30 14:18:39 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Sep 30 14:18:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:39 compute-0 podman[106938]: 2025-09-30 14:18:39.526350226 +0000 UTC m=+0.049095974 container exec 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:18:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:39.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:18:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Sep 30 14:18:39 compute-0 podman[106938]: 2025-09-30 14:18:39.688891994 +0000 UTC m=+0.211637732 container exec_died 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:18:39 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Sep 30 14:18:39 compute-0 ceph-mon[74194]: osdmap e103: 3 total, 3 up, 3 in
Sep 30 14:18:39 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Sep 30 14:18:39 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Sep 30 14:18:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Sep 30 14:18:39 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Sep 30 14:18:39 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 104 pg[9.12( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=4 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=104) [1]/[0] r=0 lpr=104 pi=[55,104)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:39 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 104 pg[9.12( v 48'1157 (0'0,48'1157] local-lis/les=55/56 n=4 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=104) [1]/[0] r=0 lpr=104 pi=[55,104)/1 crt=48'1157 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Sep 30 14:18:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:39 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:40 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:40 compute-0 podman[107049]: 2025-09-30 14:18:40.203341842 +0000 UTC m=+0.121661559 container exec e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:40 compute-0 podman[107049]: 2025-09-30 14:18:40.233572429 +0000 UTC m=+0.151892116 container exec_died e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:18:40 compute-0 sudo[106296]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:18:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:18:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:18:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:18:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:18:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:18:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:40 compute-0 sudo[107090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:40 compute-0 sudo[107090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:40 compute-0 sudo[107090]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:40 compute-0 sudo[107115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:18:40 compute-0 sudo[107115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Sep 30 14:18:40 compute-0 ceph-mon[74194]: pgmap v45: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 369 B/s rd, 0 op/s
Sep 30 14:18:40 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Sep 30 14:18:40 compute-0 ceph-mon[74194]: osdmap e104: 3 total, 3 up, 3 in
Sep 30 14:18:40 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:40 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:40 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:40 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:18:40 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:40 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:40 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:18:40 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:18:40 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:18:41 compute-0 podman[107180]: 2025-09-30 14:18:41.04591847 +0000 UTC m=+0.022926678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:41 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:41 compute-0 podman[107180]: 2025-09-30 14:18:41.390156781 +0000 UTC m=+0.367164969 container create b67eee72c304525de345bf1dc59b4a7cf8cc793968edc8b2bfcc2b5b03ea0216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:18:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Sep 30 14:18:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v47: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Sep 30 14:18:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:41.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:41 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Sep 30 14:18:41 compute-0 systemd[1]: Started libpod-conmon-b67eee72c304525de345bf1dc59b4a7cf8cc793968edc8b2bfcc2b5b03ea0216.scope.
Sep 30 14:18:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:41.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:41 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:42 compute-0 podman[107180]: 2025-09-30 14:18:42.05849751 +0000 UTC m=+1.035505708 container init b67eee72c304525de345bf1dc59b4a7cf8cc793968edc8b2bfcc2b5b03ea0216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shannon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:18:42 compute-0 podman[107180]: 2025-09-30 14:18:42.064971277 +0000 UTC m=+1.041979455 container start b67eee72c304525de345bf1dc59b4a7cf8cc793968edc8b2bfcc2b5b03ea0216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shannon, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:18:42 compute-0 naughty_shannon[107205]: 167 167
Sep 30 14:18:42 compute-0 systemd[1]: libpod-b67eee72c304525de345bf1dc59b4a7cf8cc793968edc8b2bfcc2b5b03ea0216.scope: Deactivated successfully.
Sep 30 14:18:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:42 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:42 compute-0 podman[107180]: 2025-09-30 14:18:42.230094256 +0000 UTC m=+1.207102464 container attach b67eee72c304525de345bf1dc59b4a7cf8cc793968edc8b2bfcc2b5b03ea0216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shannon, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:18:42 compute-0 podman[107180]: 2025-09-30 14:18:42.230932949 +0000 UTC m=+1.207941127 container died b67eee72c304525de345bf1dc59b4a7cf8cc793968edc8b2bfcc2b5b03ea0216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:18:42 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 105 pg[9.12( v 48'1157 (0'0,48'1157] local-lis/les=104/105 n=4 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=104) [1]/[0] async=[1] r=0 lpr=104 pi=[55,104)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3bd4e69893f60fe82c56477ba52db48a5da31ca1733d907d550f3550ef68756-merged.mount: Deactivated successfully.
Sep 30 14:18:42 compute-0 ceph-mon[74194]: pgmap v47: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Sep 30 14:18:42 compute-0 ceph-mon[74194]: osdmap e105: 3 total, 3 up, 3 in
Sep 30 14:18:42 compute-0 podman[107180]: 2025-09-30 14:18:42.842829704 +0000 UTC m=+1.819837912 container remove b67eee72c304525de345bf1dc59b4a7cf8cc793968edc8b2bfcc2b5b03ea0216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shannon, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:18:42 compute-0 systemd[1]: libpod-conmon-b67eee72c304525de345bf1dc59b4a7cf8cc793968edc8b2bfcc2b5b03ea0216.scope: Deactivated successfully.
Sep 30 14:18:43 compute-0 podman[107229]: 2025-09-30 14:18:43.042825477 +0000 UTC m=+0.075132907 container create 4e0aaa7dfdd1cd632cd38040d38aba3e0a2272a1664943db80243c98e2ac4ebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:18:43 compute-0 podman[107229]: 2025-09-30 14:18:42.995672527 +0000 UTC m=+0.027979977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:43 compute-0 systemd[1]: Started libpod-conmon-4e0aaa7dfdd1cd632cd38040d38aba3e0a2272a1664943db80243c98e2ac4ebe.scope.
Sep 30 14:18:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22ce36f075fe36fee42bd3ca45870042fa0fcb1614e8415a8e03d44424fe53b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22ce36f075fe36fee42bd3ca45870042fa0fcb1614e8415a8e03d44424fe53b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22ce36f075fe36fee42bd3ca45870042fa0fcb1614e8415a8e03d44424fe53b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22ce36f075fe36fee42bd3ca45870042fa0fcb1614e8415a8e03d44424fe53b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22ce36f075fe36fee42bd3ca45870042fa0fcb1614e8415a8e03d44424fe53b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:43 compute-0 podman[107229]: 2025-09-30 14:18:43.204091521 +0000 UTC m=+0.236398951 container init 4e0aaa7dfdd1cd632cd38040d38aba3e0a2272a1664943db80243c98e2ac4ebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:18:43 compute-0 podman[107229]: 2025-09-30 14:18:43.212391628 +0000 UTC m=+0.244699058 container start 4e0aaa7dfdd1cd632cd38040d38aba3e0a2272a1664943db80243c98e2ac4ebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:18:43 compute-0 podman[107229]: 2025-09-30 14:18:43.228318763 +0000 UTC m=+0.260626213 container attach 4e0aaa7dfdd1cd632cd38040d38aba3e0a2272a1664943db80243c98e2ac4ebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:18:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:43 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v49: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:43.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:43 compute-0 inspiring_panini[107246]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:18:43 compute-0 inspiring_panini[107246]: --> All data devices are unavailable
Sep 30 14:18:43 compute-0 systemd[1]: libpod-4e0aaa7dfdd1cd632cd38040d38aba3e0a2272a1664943db80243c98e2ac4ebe.scope: Deactivated successfully.
Sep 30 14:18:43 compute-0 conmon[107246]: conmon 4e0aaa7dfdd1cd632cd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e0aaa7dfdd1cd632cd38040d38aba3e0a2272a1664943db80243c98e2ac4ebe.scope/container/memory.events
Sep 30 14:18:43 compute-0 podman[107229]: 2025-09-30 14:18:43.560689739 +0000 UTC m=+0.592997189 container died 4e0aaa7dfdd1cd632cd38040d38aba3e0a2272a1664943db80243c98e2ac4ebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:18:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:43.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-22ce36f075fe36fee42bd3ca45870042fa0fcb1614e8415a8e03d44424fe53b6-merged.mount: Deactivated successfully.
Sep 30 14:18:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Sep 30 14:18:43 compute-0 podman[107229]: 2025-09-30 14:18:43.646922729 +0000 UTC m=+0.679230159 container remove 4e0aaa7dfdd1cd632cd38040d38aba3e0a2272a1664943db80243c98e2ac4ebe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:18:43 compute-0 systemd[1]: libpod-conmon-4e0aaa7dfdd1cd632cd38040d38aba3e0a2272a1664943db80243c98e2ac4ebe.scope: Deactivated successfully.
Sep 30 14:18:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Sep 30 14:18:43 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Sep 30 14:18:43 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 106 pg[9.12( v 48'1157 (0'0,48'1157] local-lis/les=104/105 n=4 ec=55/37 lis/c=104/55 les/c/f=105/56/0 sis=106 pruub=14.592539787s) [1] async=[1] r=-1 lpr=106 pi=[55,106)/1 crt=48'1157 lcod 0'0 mlcod 0'0 active pruub 305.110748291s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:18:43 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 106 pg[9.12( v 48'1157 (0'0,48'1157] local-lis/les=104/105 n=4 ec=55/37 lis/c=104/55 les/c/f=105/56/0 sis=106 pruub=14.592488289s) [1] r=-1 lpr=106 pi=[55,106)/1 crt=48'1157 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 305.110748291s@ mbc={}] state<Start>: transitioning to Stray
Sep 30 14:18:43 compute-0 sudo[107115]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:43 compute-0 sudo[107284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:43 compute-0 sudo[107284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:43 compute-0 sudo[107284]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:43 compute-0 sudo[107309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:18:43 compute-0 sudo[107309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:43 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:44 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:44 compute-0 podman[107382]: 2025-09-30 14:18:44.229305186 +0000 UTC m=+0.036239633 container create 839bdb41bf6677ae46e90f0a419fad1573d950e454fe0c91346852b35d64c8dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_haibt, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 14:18:44 compute-0 systemd[1]: Started libpod-conmon-839bdb41bf6677ae46e90f0a419fad1573d950e454fe0c91346852b35d64c8dd.scope.
Sep 30 14:18:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:44 compute-0 podman[107382]: 2025-09-30 14:18:44.303557117 +0000 UTC m=+0.110491584 container init 839bdb41bf6677ae46e90f0a419fad1573d950e454fe0c91346852b35d64c8dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_haibt, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:18:44 compute-0 podman[107382]: 2025-09-30 14:18:44.213984996 +0000 UTC m=+0.020919463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:44 compute-0 podman[107382]: 2025-09-30 14:18:44.310828056 +0000 UTC m=+0.117762503 container start 839bdb41bf6677ae46e90f0a419fad1573d950e454fe0c91346852b35d64c8dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:18:44 compute-0 podman[107382]: 2025-09-30 14:18:44.314837376 +0000 UTC m=+0.121771823 container attach 839bdb41bf6677ae46e90f0a419fad1573d950e454fe0c91346852b35d64c8dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:18:44 compute-0 hardcore_haibt[107399]: 167 167
Sep 30 14:18:44 compute-0 systemd[1]: libpod-839bdb41bf6677ae46e90f0a419fad1573d950e454fe0c91346852b35d64c8dd.scope: Deactivated successfully.
Sep 30 14:18:44 compute-0 podman[107382]: 2025-09-30 14:18:44.316841121 +0000 UTC m=+0.123775568 container died 839bdb41bf6677ae46e90f0a419fad1573d950e454fe0c91346852b35d64c8dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_haibt, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Sep 30 14:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fc505594bfbee5394cf5acaee12ad4c0116b23480a986d5b8cc146e614783c8-merged.mount: Deactivated successfully.
Sep 30 14:18:44 compute-0 podman[107382]: 2025-09-30 14:18:44.379688171 +0000 UTC m=+0.186622618 container remove 839bdb41bf6677ae46e90f0a419fad1573d950e454fe0c91346852b35d64c8dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_haibt, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:18:44 compute-0 systemd[1]: libpod-conmon-839bdb41bf6677ae46e90f0a419fad1573d950e454fe0c91346852b35d64c8dd.scope: Deactivated successfully.
Sep 30 14:18:44 compute-0 podman[107422]: 2025-09-30 14:18:44.531124065 +0000 UTC m=+0.041100546 container create e7987756d03d58d23e51d13a510a6acd19d833de7fe06e7c1724b3ae803b067a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:18:44 compute-0 systemd[1]: Started libpod-conmon-e7987756d03d58d23e51d13a510a6acd19d833de7fe06e7c1724b3ae803b067a.scope.
Sep 30 14:18:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d81a499937c0de1f8dcec3f9120c1435c67811e013cf46af779b46d0d4da5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d81a499937c0de1f8dcec3f9120c1435c67811e013cf46af779b46d0d4da5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d81a499937c0de1f8dcec3f9120c1435c67811e013cf46af779b46d0d4da5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d81a499937c0de1f8dcec3f9120c1435c67811e013cf46af779b46d0d4da5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:44 compute-0 podman[107422]: 2025-09-30 14:18:44.511087387 +0000 UTC m=+0.021063898 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:18:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:18:44 compute-0 podman[107422]: 2025-09-30 14:18:44.621529149 +0000 UTC m=+0.131505650 container init e7987756d03d58d23e51d13a510a6acd19d833de7fe06e7c1724b3ae803b067a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 14:18:44 compute-0 podman[107422]: 2025-09-30 14:18:44.628528041 +0000 UTC m=+0.138504522 container start e7987756d03d58d23e51d13a510a6acd19d833de7fe06e7c1724b3ae803b067a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:18:44 compute-0 podman[107422]: 2025-09-30 14:18:44.631192654 +0000 UTC m=+0.141169135 container attach e7987756d03d58d23e51d13a510a6acd19d833de7fe06e7c1724b3ae803b067a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:18:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Sep 30 14:18:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:44] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Sep 30 14:18:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:44] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Sep 30 14:18:44 compute-0 ceph-mon[74194]: pgmap v49: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:44 compute-0 ceph-mon[74194]: osdmap e106: 3 total, 3 up, 3 in
Sep 30 14:18:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:18:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Sep 30 14:18:44 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]: {
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:     "0": [
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:         {
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "devices": [
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "/dev/loop3"
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             ],
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "lv_name": "ceph_lv0",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "lv_size": "21470642176",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "name": "ceph_lv0",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "tags": {
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.cluster_name": "ceph",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.crush_device_class": "",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.encrypted": "0",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.osd_id": "0",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.type": "block",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.vdo": "0",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:                 "ceph.with_tpm": "0"
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             },
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "type": "block",
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:             "vg_name": "ceph_vg0"
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:         }
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]:     ]
Sep 30 14:18:44 compute-0 flamboyant_kapitsa[107439]: }
Sep 30 14:18:44 compute-0 systemd[1]: libpod-e7987756d03d58d23e51d13a510a6acd19d833de7fe06e7c1724b3ae803b067a.scope: Deactivated successfully.
Sep 30 14:18:45 compute-0 podman[107448]: 2025-09-30 14:18:45.006397772 +0000 UTC m=+0.023251418 container died e7987756d03d58d23e51d13a510a6acd19d833de7fe06e7c1724b3ae803b067a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-61d81a499937c0de1f8dcec3f9120c1435c67811e013cf46af779b46d0d4da5d-merged.mount: Deactivated successfully.
Sep 30 14:18:45 compute-0 podman[107448]: 2025-09-30 14:18:45.049912892 +0000 UTC m=+0.066766538 container remove e7987756d03d58d23e51d13a510a6acd19d833de7fe06e7c1724b3ae803b067a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:18:45 compute-0 systemd[1]: libpod-conmon-e7987756d03d58d23e51d13a510a6acd19d833de7fe06e7c1724b3ae803b067a.scope: Deactivated successfully.
Sep 30 14:18:45 compute-0 sudo[107309]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:45 compute-0 sudo[107463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:18:45 compute-0 sudo[107463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:45 compute-0 sudo[107463]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:45 compute-0 sudo[107488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:18:45 compute-0 sudo[107488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:45 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v52: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 14:18:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:45.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 14:18:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:45 compute-0 podman[107555]: 2025-09-30 14:18:45.648331419 +0000 UTC m=+0.041189219 container create 118d5dbcfeee2fb346ddc540de856fb60ff91a45d3357fc97aaf2d7d77e4ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 14:18:45 compute-0 systemd[1]: Started libpod-conmon-118d5dbcfeee2fb346ddc540de856fb60ff91a45d3357fc97aaf2d7d77e4ed55.scope.
Sep 30 14:18:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:45 compute-0 podman[107555]: 2025-09-30 14:18:45.629968336 +0000 UTC m=+0.022826176 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:45 compute-0 podman[107555]: 2025-09-30 14:18:45.733977052 +0000 UTC m=+0.126834882 container init 118d5dbcfeee2fb346ddc540de856fb60ff91a45d3357fc97aaf2d7d77e4ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:18:45 compute-0 podman[107555]: 2025-09-30 14:18:45.741450677 +0000 UTC m=+0.134308477 container start 118d5dbcfeee2fb346ddc540de856fb60ff91a45d3357fc97aaf2d7d77e4ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 14:18:45 compute-0 hopeful_shirley[107573]: 167 167
Sep 30 14:18:45 compute-0 systemd[1]: libpod-118d5dbcfeee2fb346ddc540de856fb60ff91a45d3357fc97aaf2d7d77e4ed55.scope: Deactivated successfully.
Sep 30 14:18:45 compute-0 podman[107555]: 2025-09-30 14:18:45.746639059 +0000 UTC m=+0.139496879 container attach 118d5dbcfeee2fb346ddc540de856fb60ff91a45d3357fc97aaf2d7d77e4ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:18:45 compute-0 podman[107555]: 2025-09-30 14:18:45.747123042 +0000 UTC m=+0.139980852 container died 118d5dbcfeee2fb346ddc540de856fb60ff91a45d3357fc97aaf2d7d77e4ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 14:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-640613cf5943f6e64c9e3f134fe4326d53363119b232932a71e444b0cc492eff-merged.mount: Deactivated successfully.
Sep 30 14:18:45 compute-0 podman[107555]: 2025-09-30 14:18:45.788899135 +0000 UTC m=+0.181756935 container remove 118d5dbcfeee2fb346ddc540de856fb60ff91a45d3357fc97aaf2d7d77e4ed55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:18:45 compute-0 systemd[1]: libpod-conmon-118d5dbcfeee2fb346ddc540de856fb60ff91a45d3357fc97aaf2d7d77e4ed55.scope: Deactivated successfully.
Sep 30 14:18:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:45 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:45 compute-0 podman[107596]: 2025-09-30 14:18:45.96812216 +0000 UTC m=+0.057407872 container create 3f6ceb5605cdda2d37d92493eb822ccb69c8be53e7af211f5039703fb4e6a77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:18:46 compute-0 systemd[1]: Started libpod-conmon-3f6ceb5605cdda2d37d92493eb822ccb69c8be53e7af211f5039703fb4e6a77f.scope.
Sep 30 14:18:46 compute-0 podman[107596]: 2025-09-30 14:18:45.940965877 +0000 UTC m=+0.030251679 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:18:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8571bfe856eb1103f44a587b7d64f16ae690c915a1a3ac536a7f4999751aa51a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8571bfe856eb1103f44a587b7d64f16ae690c915a1a3ac536a7f4999751aa51a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8571bfe856eb1103f44a587b7d64f16ae690c915a1a3ac536a7f4999751aa51a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8571bfe856eb1103f44a587b7d64f16ae690c915a1a3ac536a7f4999751aa51a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:18:46 compute-0 podman[107596]: 2025-09-30 14:18:46.07154444 +0000 UTC m=+0.160830172 container init 3f6ceb5605cdda2d37d92493eb822ccb69c8be53e7af211f5039703fb4e6a77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 14:18:46 compute-0 podman[107596]: 2025-09-30 14:18:46.078591103 +0000 UTC m=+0.167876815 container start 3f6ceb5605cdda2d37d92493eb822ccb69c8be53e7af211f5039703fb4e6a77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 14:18:46 compute-0 podman[107596]: 2025-09-30 14:18:46.082999194 +0000 UTC m=+0.172284936 container attach 3f6ceb5605cdda2d37d92493eb822ccb69c8be53e7af211f5039703fb4e6a77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:18:46 compute-0 ceph-mon[74194]: osdmap e107: 3 total, 3 up, 3 in
Sep 30 14:18:46 compute-0 ceph-mon[74194]: pgmap v52: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:46 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:46 compute-0 lvm[107692]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:18:46 compute-0 lvm[107692]: VG ceph_vg0 finished
Sep 30 14:18:46 compute-0 lvm[107694]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:18:46 compute-0 lvm[107694]: VG ceph_vg0 finished
Sep 30 14:18:46 compute-0 agitated_nash[107614]: {}
Sep 30 14:18:46 compute-0 systemd[1]: libpod-3f6ceb5605cdda2d37d92493eb822ccb69c8be53e7af211f5039703fb4e6a77f.scope: Deactivated successfully.
Sep 30 14:18:46 compute-0 systemd[1]: libpod-3f6ceb5605cdda2d37d92493eb822ccb69c8be53e7af211f5039703fb4e6a77f.scope: Consumed 1.180s CPU time.
Sep 30 14:18:46 compute-0 podman[107596]: 2025-09-30 14:18:46.870384021 +0000 UTC m=+0.959669753 container died 3f6ceb5605cdda2d37d92493eb822ccb69c8be53e7af211f5039703fb4e6a77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8571bfe856eb1103f44a587b7d64f16ae690c915a1a3ac536a7f4999751aa51a-merged.mount: Deactivated successfully.
Sep 30 14:18:47 compute-0 podman[107596]: 2025-09-30 14:18:47.000296516 +0000 UTC m=+1.089582238 container remove 3f6ceb5605cdda2d37d92493eb822ccb69c8be53e7af211f5039703fb4e6a77f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:18:47 compute-0 systemd[1]: libpod-conmon-3f6ceb5605cdda2d37d92493eb822ccb69c8be53e7af211f5039703fb4e6a77f.scope: Deactivated successfully.
Sep 30 14:18:47 compute-0 sudo[107488]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:18:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:18:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:47 compute-0 sudo[107709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:18:47 compute-0 sudo[107709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:47 compute-0 sudo[107709]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:47 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v53: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Sep 30 14:18:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Sep 30 14:18:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Sep 30 14:18:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:47.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:47.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:47 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:18:48 compute-0 ceph-mon[74194]: pgmap v53: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Sep 30 14:18:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Sep 30 14:18:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Sep 30 14:18:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Sep 30 14:18:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Sep 30 14:18:48 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Sep 30 14:18:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:48 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:49 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Sep 30 14:18:49 compute-0 ceph-mon[74194]: osdmap e108: 3 total, 3 up, 3 in
Sep 30 14:18:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:49 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v55: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Sep 30 14:18:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Sep 30 14:18:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Sep 30 14:18:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:49.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:49.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:49 compute-0 sshd-session[107736]: Received disconnect from 210.90.155.80 port 40878:11: Bye Bye [preauth]
Sep 30 14:18:49 compute-0 sshd-session[107736]: Disconnected from authenticating user root 210.90.155.80 port 40878 [preauth]
Sep 30 14:18:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:49 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Sep 30 14:18:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:50 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Sep 30 14:18:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Sep 30 14:18:50 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Sep 30 14:18:50 compute-0 ceph-mon[74194]: pgmap v55: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Sep 30 14:18:50 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Sep 30 14:18:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Sep 30 14:18:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Sep 30 14:18:50 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Sep 30 14:18:51 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Sep 30 14:18:51 compute-0 ceph-mon[74194]: osdmap e109: 3 total, 3 up, 3 in
Sep 30 14:18:51 compute-0 ceph-mon[74194]: osdmap e110: 3 total, 3 up, 3 in
Sep 30 14:18:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:51 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Sep 30 14:18:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Sep 30 14:18:51 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Sep 30 14:18:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v59: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:51.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:18:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:51.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:18:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:51 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:52 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Sep 30 14:18:52 compute-0 ceph-mon[74194]: osdmap e111: 3 total, 3 up, 3 in
Sep 30 14:18:52 compute-0 ceph-mon[74194]: pgmap v59: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Sep 30 14:18:52 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Sep 30 14:18:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:53 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v61: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Sep 30 14:18:53 compute-0 ceph-mon[74194]: osdmap e112: 3 total, 3 up, 3 in
Sep 30 14:18:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:18:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:53.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:18:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Sep 30 14:18:53 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Sep 30 14:18:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:53.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:53 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:54 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:54 compute-0 ceph-mon[74194]: pgmap v61: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:54 compute-0 ceph-mon[74194]: osdmap e113: 3 total, 3 up, 3 in
Sep 30 14:18:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:54] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Sep 30 14:18:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:18:54] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Sep 30 14:18:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:55 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:18:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v63: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:18:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:55.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:18:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:55.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:55 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:56 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:56 compute-0 ceph-mon[74194]: pgmap v63: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:18:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:57 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v64: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 508 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Sep 30 14:18:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Sep 30 14:18:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Sep 30 14:18:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:57.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Sep 30 14:18:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Sep 30 14:18:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Sep 30 14:18:57 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Sep 30 14:18:57 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Sep 30 14:18:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:57.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:57 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:58 compute-0 sudo[107748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:18:58 compute-0 sudo[107748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:18:58 compute-0 sudo[107748]: pam_unix(sudo:session): session closed for user root
Sep 30 14:18:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:58 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a680 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Sep 30 14:18:58 compute-0 ceph-mon[74194]: pgmap v64: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 508 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Sep 30 14:18:58 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Sep 30 14:18:58 compute-0 ceph-mon[74194]: osdmap e114: 3 total, 3 up, 3 in
Sep 30 14:18:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Sep 30 14:18:58 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Sep 30 14:18:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:59 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v67: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Sep 30 14:18:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Sep 30 14:18:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:18:59
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.control', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', '.mgr', 'vms']
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:18:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:18:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:18:59.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:18:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:18:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000036s ======
Sep 30 14:18:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:18:59.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Sep 30 14:18:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:18:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:18:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:18:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Sep 30 14:18:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:18:59 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:18:59 compute-0 ceph-mon[74194]: osdmap e115: 3 total, 3 up, 3 in
Sep 30 14:18:59 compute-0 ceph-mon[74194]: pgmap v67: 337 pgs: 337 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Sep 30 14:18:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Sep 30 14:18:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:19:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Sep 30 14:19:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Sep 30 14:19:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Sep 30 14:19:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:00 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:19:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:19:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:19:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:19:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:19:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:19:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:19:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:19:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:19:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:19:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:19:01 compute-0 sudo[104675]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Sep 30 14:19:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Sep 30 14:19:01 compute-0 ceph-mon[74194]: osdmap e116: 3 total, 3 up, 3 in
Sep 30 14:19:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Sep 30 14:19:01 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Sep 30 14:19:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:01 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v70: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Sep 30 14:19:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Sep 30 14:19:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Sep 30 14:19:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:01.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:01.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:01 compute-0 anacron[1150]: Job `cron.monthly' started
Sep 30 14:19:01 compute-0 anacron[1150]: Job `cron.monthly' terminated
Sep 30 14:19:01 compute-0 anacron[1150]: Normal exit (3 jobs run)
Sep 30 14:19:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:01 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Sep 30 14:19:02 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Sep 30 14:19:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Sep 30 14:19:02 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Sep 30 14:19:02 compute-0 ceph-mon[74194]: osdmap e117: 3 total, 3 up, 3 in
Sep 30 14:19:02 compute-0 ceph-mon[74194]: pgmap v70: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Sep 30 14:19:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Sep 30 14:19:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:02 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:02 compute-0 sudo[107928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlhrriuvmvhtkhpmwshqkxiwbgbhdkxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241942.2915504-342-79648165939078/AnsiballZ_command.py'
Sep 30 14:19:02 compute-0 sudo[107928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:02 compute-0 python3.9[107930]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:19:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Sep 30 14:19:03 compute-0 ceph-mon[74194]: osdmap e118: 3 total, 3 up, 3 in
Sep 30 14:19:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:03 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:03 compute-0 sudo[107928]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v72: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 0 objects/s recovering
Sep 30 14:19:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Sep 30 14:19:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Sep 30 14:19:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000036s ======
Sep 30 14:19:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:03.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Sep 30 14:19:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:03 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c00a6c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:04 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Sep 30 14:19:04 compute-0 ceph-mon[74194]: pgmap v72: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 0 objects/s recovering
Sep 30 14:19:04 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Sep 30 14:19:04 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Sep 30 14:19:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Sep 30 14:19:04 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Sep 30 14:19:04 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 119 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=86/86 les/c/f=87/87/0 sis=119) [0] r=0 lpr=119 pi=[86,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:19:04 compute-0 sudo[108217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvpayktioqzdwhknhbgptsrbiplueduq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241943.8044856-366-215680175113712/AnsiballZ_selinux.py'
Sep 30 14:19:04 compute-0 sudo[108217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:04] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Sep 30 14:19:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:04] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Sep 30 14:19:04 compute-0 python3.9[108219]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Sep 30 14:19:04 compute-0 sudo[108217]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Sep 30 14:19:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Sep 30 14:19:05 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Sep 30 14:19:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 120 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=86/86 les/c/f=87/87/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[86,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:19:05 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 120 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=86/86 les/c/f=87/87/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[86,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 14:19:05 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Sep 30 14:19:05 compute-0 ceph-mon[74194]: osdmap e119: 3 total, 3 up, 3 in
Sep 30 14:19:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:05 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf64003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Sep 30 14:19:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v75: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:19:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Sep 30 14:19:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Sep 30 14:19:05 compute-0 sudo[108371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fntkpuguriuizhykshhlpzvchfuzsxkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241945.2448995-399-93780475322632/AnsiballZ_command.py'
Sep 30 14:19:05 compute-0 sudo[108371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:05.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:05 compute-0 python3.9[108373]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Sep 30 14:19:05 compute-0 sudo[108371]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:05 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:06 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0029d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Sep 30 14:19:06 compute-0 ceph-mon[74194]: osdmap e120: 3 total, 3 up, 3 in
Sep 30 14:19:06 compute-0 ceph-mon[74194]: pgmap v75: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Sep 30 14:19:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Sep 30 14:19:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Sep 30 14:19:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Sep 30 14:19:06 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Sep 30 14:19:06 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 121 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=86/86 les/c/f=87/87/0 sis=121) [0] r=0 lpr=121 pi=[86,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:19:06 compute-0 sudo[108524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-essmabmiycuwseohklzcbcnyexsexowf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241945.9327688-423-127406732018413/AnsiballZ_file.py'
Sep 30 14:19:06 compute-0 sudo[108524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:06 compute-0 python3.9[108526]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:19:06 compute-0 sudo[108524]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:07 compute-0 sudo[108676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imloyntgupzncgtnpjbqojxsejlahohw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241946.8274088-447-108647498735796/AnsiballZ_mount.py'
Sep 30 14:19:07 compute-0 sudo[108676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Sep 30 14:19:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Sep 30 14:19:07 compute-0 ceph-mon[74194]: osdmap e121: 3 total, 3 up, 3 in
Sep 30 14:19:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:07 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Sep 30 14:19:07 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Sep 30 14:19:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 122 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=86/86 les/c/f=87/87/0 sis=122) [0]/[1] r=-1 lpr=122 pi=[86,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:19:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 122 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=86/86 les/c/f=87/87/0 sis=122) [0]/[1] r=-1 lpr=122 pi=[86,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 14:19:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 122 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=0/0 n=7 ec=55/37 lis/c=120/86 les/c/f=121/87/0 sis=122) [0] r=0 lpr=122 pi=[86,122)/1 luod=0'0 crt=48'1157 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:19:07 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 122 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=0/0 n=7 ec=55/37 lis/c=120/86 les/c/f=121/87/0 sis=122) [0] r=0 lpr=122 pi=[86,122)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:19:07 compute-0 python3.9[108678]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Sep 30 14:19:07 compute-0 sudo[108676]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v78: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:19:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:07.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:07.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:07 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:08 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900014c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Sep 30 14:19:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Sep 30 14:19:08 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Sep 30 14:19:08 compute-0 ceph-mon[74194]: osdmap e122: 3 total, 3 up, 3 in
Sep 30 14:19:08 compute-0 ceph-mon[74194]: pgmap v78: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:19:08 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 123 pg[9.19( v 48'1157 (0'0,48'1157] local-lis/les=122/123 n=7 ec=55/37 lis/c=120/86 les/c/f=121/87/0 sis=122) [0] r=0 lpr=122 pi=[86,122)/1 crt=48'1157 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:19:08 compute-0 sudo[108831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebcvlvkcbdrakxkujtpivvvvrkafnoin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241948.3229969-531-56301653055881/AnsiballZ_file.py'
Sep 30 14:19:08 compute-0 sudo[108831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:08 compute-0 python3.9[108833]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:19:08 compute-0 sudo[108831]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:09 compute-0 sudo[108984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkkyedmarjqvxakuxplqxkkcdsztdciw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241949.0365443-555-122171730940830/AnsiballZ_stat.py'
Sep 30 14:19:09 compute-0 sudo[108984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:09 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0029d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v80: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 732 B/s rd, 0 op/s
Sep 30 14:19:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:09.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:09 compute-0 python3.9[108986]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:19:09 compute-0 sudo[108984]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Sep 30 14:19:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:09.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:09 compute-0 ceph-mon[74194]: osdmap e123: 3 total, 3 up, 3 in
Sep 30 14:19:09 compute-0 sudo[109063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfcujsgpidvjcemybcgzdsmyvqbzjdgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241949.0365443-555-122171730940830/AnsiballZ_file.py'
Sep 30 14:19:09 compute-0 sudo[109063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Sep 30 14:19:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:09 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:09 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Sep 30 14:19:10 compute-0 python3.9[109065]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:19:10 compute-0 sudo[109063]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:10 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 124 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=0/0 n=4 ec=55/37 lis/c=122/86 les/c/f=123/87/0 sis=124) [0] r=0 lpr=124 pi=[86,124)/1 luod=0'0 crt=48'1157 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:19:10 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 124 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=0/0 n=4 ec=55/37 lis/c=122/86 les/c/f=123/87/0 sis=124) [0] r=0 lpr=124 pi=[86,124)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:19:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:10 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:19:10 compute-0 ceph-mon[74194]: pgmap v80: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 732 B/s rd, 0 op/s
Sep 30 14:19:10 compute-0 ceph-mon[74194]: osdmap e124: 3 total, 3 up, 3 in
Sep 30 14:19:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Sep 30 14:19:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Sep 30 14:19:10 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Sep 30 14:19:10 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 125 pg[9.1a( v 48'1157 (0'0,48'1157] local-lis/les=124/125 n=4 ec=55/37 lis/c=122/86 les/c/f=123/87/0 sis=124) [0] r=0 lpr=124 pi=[86,124)/1 crt=48'1157 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:19:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:11 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900014c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v83: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 2 objects/s recovering
Sep 30 14:19:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Sep 30 14:19:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Sep 30 14:19:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:11.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:11 compute-0 sudo[109217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbsttzhgbpiqmhpgpydboaliijgmnrvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241951.3191764-627-90840190374715/AnsiballZ_getent.py'
Sep 30 14:19:11 compute-0 sudo[109217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Sep 30 14:19:11 compute-0 ceph-mon[74194]: osdmap e125: 3 total, 3 up, 3 in
Sep 30 14:19:11 compute-0 ceph-mon[74194]: pgmap v83: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 2 objects/s recovering
Sep 30 14:19:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Sep 30 14:19:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:11 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:11 compute-0 python3.9[109219]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Sep 30 14:19:11 compute-0 sudo[109217]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Sep 30 14:19:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Sep 30 14:19:12 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Sep 30 14:19:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:12 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:12 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 126 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=66/66 les/c/f=67/67/0 sis=126) [0] r=0 lpr=126 pi=[66,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:19:12 compute-0 sudo[109370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndhnzivgkpjrcrxgsioubuayphnokevu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241952.2732847-657-196534926571915/AnsiballZ_getent.py'
Sep 30 14:19:12 compute-0 sudo[109370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:12 compute-0 python3.9[109372]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Sep 30 14:19:12 compute-0 sudo[109370]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Sep 30 14:19:13 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Sep 30 14:19:13 compute-0 ceph-mon[74194]: osdmap e126: 3 total, 3 up, 3 in
Sep 30 14:19:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Sep 30 14:19:13 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 127 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=66/66 les/c/f=67/67/0 sis=127) [0]/[2] r=-1 lpr=127 pi=[66,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:19:13 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 127 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=66/66 les/c/f=67/67/0 sis=127) [0]/[2] r=-1 lpr=127 pi=[66,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 14:19:13 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Sep 30 14:19:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:13 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:13 compute-0 sudo[109524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvvrbnsawaaryxadyllxgzcvyqupjqze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241952.958781-681-279032561139804/AnsiballZ_group.py'
Sep 30 14:19:13 compute-0 sudo[109524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v86: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Sep 30 14:19:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Sep 30 14:19:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Sep 30 14:19:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:13.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:13.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:13 compute-0 python3.9[109526]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 14:19:13 compute-0 sudo[109524]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:13 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900014c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Sep 30 14:19:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Sep 30 14:19:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Sep 30 14:19:14 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Sep 30 14:19:14 compute-0 ceph-mon[74194]: osdmap e127: 3 total, 3 up, 3 in
Sep 30 14:19:14 compute-0 ceph-mon[74194]: pgmap v86: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Sep 30 14:19:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Sep 30 14:19:14 compute-0 sudo[109677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yueybuuzzdakgrasigsfjhfoeqnrasgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241953.9050314-708-192559714263092/AnsiballZ_file.py'
Sep 30 14:19:14 compute-0 sudo[109677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:14 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:14 compute-0 python3.9[109679]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Sep 30 14:19:14 compute-0 sudo[109677]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:19:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:19:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:14] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Sep 30 14:19:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:14] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Sep 30 14:19:15 compute-0 sudo[109829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdqumvqkpuzmglytsbaznetmiigdciao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241954.8042712-741-230970178458481/AnsiballZ_dnf.py'
Sep 30 14:19:15 compute-0 sudo[109829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Sep 30 14:19:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Sep 30 14:19:15 compute-0 ceph-mon[74194]: osdmap e128: 3 total, 3 up, 3 in
Sep 30 14:19:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:19:15 compute-0 python3.9[109831]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:19:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:15 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 2 objects/s recovering
Sep 30 14:19:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Sep 30 14:19:15 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Sep 30 14:19:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Sep 30 14:19:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:15.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:15 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Sep 30 14:19:15 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 129 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=0/0 n=2 ec=55/37 lis/c=127/66 les/c/f=128/67/0 sis=129) [0] r=0 lpr=129 pi=[66,129)/1 luod=0'0 crt=48'1157 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:19:15 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 129 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=0/0 n=2 ec=55/37 lis/c=127/66 les/c/f=128/67/0 sis=129) [0] r=0 lpr=129 pi=[66,129)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:19:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000036s ======
Sep 30 14:19:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Sep 30 14:19:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:15 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:16 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900014c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:16 compute-0 ceph-mon[74194]: pgmap v88: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 2 objects/s recovering
Sep 30 14:19:16 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Sep 30 14:19:16 compute-0 ceph-mon[74194]: osdmap e129: 3 total, 3 up, 3 in
Sep 30 14:19:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Sep 30 14:19:16 compute-0 sudo[109829]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Sep 30 14:19:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Sep 30 14:19:16 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Sep 30 14:19:16 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 130 pg[9.1b( v 48'1157 (0'0,48'1157] local-lis/les=129/130 n=2 ec=55/37 lis/c=127/66 les/c/f=128/67/0 sis=129) [0] r=0 lpr=129 pi=[66,129)/1 crt=48'1157 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:19:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:17 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:17 compute-0 sudo[109985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxvmskgzvdjaovrexlufpuvznpytzmqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241957.179238-765-43163448284423/AnsiballZ_file.py'
Sep 30 14:19:17 compute-0 sudo[109985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v91: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 706 B/s rd, 0 op/s; 25 B/s, 0 objects/s recovering
Sep 30 14:19:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:17.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:17 compute-0 python3.9[109987]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:19:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:17 compute-0 sudo[109985]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Sep 30 14:19:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Sep 30 14:19:17 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Sep 30 14:19:17 compute-0 ceph-mon[74194]: osdmap e130: 3 total, 3 up, 3 in
Sep 30 14:19:17 compute-0 ceph-mon[74194]: pgmap v91: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 706 B/s rd, 0 op/s; 25 B/s, 0 objects/s recovering
Sep 30 14:19:17 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Sep 30 14:19:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:17 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:18 compute-0 sudo[110088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:19:18 compute-0 sudo[110088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:18 compute-0 sudo[110088]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:18 compute-0 sudo[110163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfsikrgxvoaykajtezcwtxqewqmkqpgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241957.8954632-789-47048759961154/AnsiballZ_stat.py'
Sep 30 14:19:18 compute-0 sudo[110163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:18 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:18 compute-0 python3.9[110165]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:19:18 compute-0 sudo[110163]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:18 compute-0 sudo[110241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svbstjxrvewxjraqulewgxbrqmqszqqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241957.8954632-789-47048759961154/AnsiballZ_file.py'
Sep 30 14:19:18 compute-0 sudo[110241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:18 compute-0 python3.9[110243]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:19:18 compute-0 sudo[110241]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Sep 30 14:19:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Sep 30 14:19:18 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Sep 30 14:19:18 compute-0 ceph-mon[74194]: osdmap e131: 3 total, 3 up, 3 in
Sep 30 14:19:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:19 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900014c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v94: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Sep 30 14:19:19 compute-0 sudo[110394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rybylpskohcvcqwrsbmdsliljacbysod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241959.1724315-828-49530748674243/AnsiballZ_stat.py'
Sep 30 14:19:19 compute-0 sudo[110394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:19.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:19.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:19 compute-0 python3.9[110396]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:19:19 compute-0 sudo[110394]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:19 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:19 compute-0 sudo[110473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wonkshloddxgsvunqwdwydeyagkihoco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241959.1724315-828-49530748674243/AnsiballZ_file.py'
Sep 30 14:19:19 compute-0 sudo[110473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Sep 30 14:19:20 compute-0 ceph-mon[74194]: osdmap e132: 3 total, 3 up, 3 in
Sep 30 14:19:20 compute-0 ceph-mon[74194]: pgmap v94: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Sep 30 14:19:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Sep 30 14:19:20 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Sep 30 14:19:20 compute-0 python3.9[110475]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:19:20 compute-0 sudo[110473]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:20 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:19:20 compute-0 sudo[110625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deugrdcztxprweltegjbwcnrultydwby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241960.6299517-873-204580717542837/AnsiballZ_dnf.py'
Sep 30 14:19:20 compute-0 sudo[110625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Sep 30 14:19:21 compute-0 python3.9[110627]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:19:21 compute-0 ceph-mon[74194]: osdmap e133: 3 total, 3 up, 3 in
Sep 30 14:19:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Sep 30 14:19:21 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Sep 30 14:19:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:21 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v97: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Sep 30 14:19:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:21.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:21.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:21 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900014c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:22 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:22 compute-0 ceph-mon[74194]: osdmap e134: 3 total, 3 up, 3 in
Sep 30 14:19:22 compute-0 ceph-mon[74194]: pgmap v97: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Sep 30 14:19:22 compute-0 sudo[110625]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:23 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v98: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 objects/s recovering
Sep 30 14:19:23 compute-0 python3.9[110781]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:19:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:23.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:23.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:23 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c0022e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:24 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900014c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:24 compute-0 python3.9[110934]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Sep 30 14:19:24 compute-0 ceph-mon[74194]: pgmap v98: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 objects/s recovering
Sep 30 14:19:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:24] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Sep 30 14:19:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:24] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Sep 30 14:19:24 compute-0 python3.9[111084]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:19:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:25 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900014c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:19:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v99: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 0 objects/s recovering
Sep 30 14:19:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000036s ======
Sep 30 14:19:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:25.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Sep 30 14:19:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:25.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:25 compute-0 ceph-mon[74194]: pgmap v99: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 0 objects/s recovering
Sep 30 14:19:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:25 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900014c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:26 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:26 compute-0 sudo[111236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpmjnndcivzgmleopydoczezgrfhxyll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241965.858526-996-262316227785053/AnsiballZ_systemd.py'
Sep 30 14:19:26 compute-0 sudo[111236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:27 compute-0 python3.9[111238]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:19:27 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Sep 30 14:19:27 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Sep 30 14:19:27 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Sep 30 14:19:27 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Sep 30 14:19:27 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Sep 30 14:19:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:27 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:27 compute-0 sudo[111236]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v100: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Sep 30 14:19:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Sep 30 14:19:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Sep 30 14:19:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Sep 30 14:19:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Sep 30 14:19:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Sep 30 14:19:27 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Sep 30 14:19:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:27.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:27 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Sep 30 14:19:27 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 135 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=77/77 les/c/f=78/78/0 sis=135) [0] r=0 lpr=135 pi=[77,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:19:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:27.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:27 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:28 compute-0 python3.9[111401]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Sep 30 14:19:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:28 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70004450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Sep 30 14:19:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Sep 30 14:19:28 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Sep 30 14:19:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 136 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=77/77 les/c/f=78/78/0 sis=136) [0]/[1] r=-1 lpr=136 pi=[77,136)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:19:28 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 136 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=77/77 les/c/f=78/78/0 sis=136) [0]/[1] r=-1 lpr=136 pi=[77,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 14:19:28 compute-0 ceph-mon[74194]: pgmap v100: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Sep 30 14:19:28 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Sep 30 14:19:28 compute-0 ceph-mon[74194]: osdmap e135: 3 total, 3 up, 3 in
Sep 30 14:19:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:29 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90001660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Sep 30 14:19:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Sep 30 14:19:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:19:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:29.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Sep 30 14:19:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:29.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:19:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:19:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:19:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7ff9db54f220>)]
Sep 30 14:19:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Sep 30 14:19:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:19:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7ff9e4da3a00>)]
Sep 30 14:19:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Sep 30 14:19:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:19:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:19:29 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:19:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Sep 30 14:19:29 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Sep 30 14:19:29 compute-0 ceph-mon[74194]: osdmap e136: 3 total, 3 up, 3 in
Sep 30 14:19:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Sep 30 14:19:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:19:29 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 137 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=97/97 les/c/f=98/98/0 sis=137) [0] r=0 lpr=137 pi=[97,137)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:19:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:29 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:30 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:19:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Sep 30 14:19:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Sep 30 14:19:30 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Sep 30 14:19:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 138 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=97/97 les/c/f=98/98/0 sis=138) [0]/[1] r=-1 lpr=138 pi=[97,138)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:19:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 138 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=97/97 les/c/f=98/98/0 sis=138) [0]/[1] r=-1 lpr=138 pi=[97,138)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Sep 30 14:19:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 138 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=0/0 n=5 ec=55/37 lis/c=136/77 les/c/f=137/78/0 sis=138) [0] r=0 lpr=138 pi=[77,138)/1 luod=0'0 crt=48'1157 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:19:30 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 138 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=0/0 n=5 ec=55/37 lis/c=136/77 les/c/f=137/78/0 sis=138) [0] r=0 lpr=138 pi=[77,138)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:19:30 compute-0 ceph-mon[74194]: pgmap v103: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Sep 30 14:19:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Sep 30 14:19:30 compute-0 ceph-mon[74194]: osdmap e137: 3 total, 3 up, 3 in
Sep 30 14:19:30 compute-0 ceph-mon[74194]: osdmap e138: 3 total, 3 up, 3 in
Sep 30 14:19:31 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.buxlkm(active, since 93s), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:19:31 compute-0 sudo[111554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lthiyjthytnbkixegsapqmdyjxrkzyqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241971.0583098-1167-247881331276282/AnsiballZ_systemd.py'
Sep 30 14:19:31 compute-0 sudo[111554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:31 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70004450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Sep 30 14:19:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v106: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s wr, 0 op/s; 0 B/s, 1 objects/s recovering
Sep 30 14:19:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Sep 30 14:19:31 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Sep 30 14:19:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:31.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:31 compute-0 python3.9[111556]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:19:31 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 139 pg[9.1e( v 48'1157 (0'0,48'1157] local-lis/les=138/139 n=5 ec=55/37 lis/c=136/77 les/c/f=137/78/0 sis=138) [0] r=0 lpr=138 pi=[77,138)/1 crt=48'1157 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:19:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:31.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:31 compute-0 sudo[111554]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:31 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90001660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:32 compute-0 sudo[111709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvrhqezwegnsyfnmvreuaveghynzjprd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241971.812463-1167-258733439719699/AnsiballZ_systemd.py'
Sep 30 14:19:32 compute-0 sudo[111709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:32 compute-0 ceph-mon[74194]: mgrmap e31: compute-0.buxlkm(active, since 93s), standbys: compute-2.udzudc, compute-1.zeqptq
Sep 30 14:19:32 compute-0 ceph-mon[74194]: pgmap v106: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s wr, 0 op/s; 0 B/s, 1 objects/s recovering
Sep 30 14:19:32 compute-0 ceph-mon[74194]: osdmap e139: 3 total, 3 up, 3 in
Sep 30 14:19:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:32 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:32 compute-0 python3.9[111711]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:19:32 compute-0 sudo[111709]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Sep 30 14:19:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Sep 30 14:19:32 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Sep 30 14:19:32 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 140 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=0/0 n=5 ec=55/37 lis/c=138/97 les/c/f=139/98/0 sis=140) [0] r=0 lpr=140 pi=[97,140)/1 luod=0'0 crt=48'1157 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Sep 30 14:19:32 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 140 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=0/0 n=5 ec=55/37 lis/c=138/97 les/c/f=139/98/0 sis=140) [0] r=0 lpr=140 pi=[97,140)/1 crt=48'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Sep 30 14:19:33 compute-0 sshd-session[100862]: Connection closed by 192.168.122.30 port 60000
Sep 30 14:19:33 compute-0 sshd-session[100859]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:19:33 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Sep 30 14:19:33 compute-0 systemd[1]: session-39.scope: Consumed 1min 2.984s CPU time.
Sep 30 14:19:33 compute-0 systemd-logind[808]: Session 39 logged out. Waiting for processes to exit.
Sep 30 14:19:33 compute-0 systemd-logind[808]: Removed session 39.
Sep 30 14:19:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:33 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v109: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s wr, 0 op/s; 0 B/s, 1 objects/s recovering
Sep 30 14:19:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000036s ======
Sep 30 14:19:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:33.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Sep 30 14:19:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Sep 30 14:19:33 compute-0 ceph-mon[74194]: osdmap e140: 3 total, 3 up, 3 in
Sep 30 14:19:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Sep 30 14:19:33 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Sep 30 14:19:33 compute-0 ceph-osd[82707]: osd.0 pg_epoch: 141 pg[9.1f( v 48'1157 (0'0,48'1157] local-lis/les=140/141 n=5 ec=55/37 lis/c=138/97 les/c/f=139/98/0 sis=140) [0] r=0 lpr=140 pi=[97,140)/1 crt=48'1157 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Sep 30 14:19:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000037s ======
Sep 30 14:19:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:33.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Sep 30 14:19:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:33 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70004450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:34 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90001660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:34 compute-0 ceph-mon[74194]: pgmap v109: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s wr, 0 op/s; 0 B/s, 1 objects/s recovering
Sep 30 14:19:34 compute-0 ceph-mon[74194]: osdmap e141: 3 total, 3 up, 3 in
Sep 30 14:19:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:34] "GET /metrics HTTP/1.1" 200 48319 "" "Prometheus/2.51.0"
Sep 30 14:19:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:34] "GET /metrics HTTP/1.1" 200 48319 "" "Prometheus/2.51.0"
Sep 30 14:19:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:19:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v111: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 609 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Sep 30 14:19:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000037s ======
Sep 30 14:19:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:35.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Sep 30 14:19:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:35.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=infra.usagestats t=2025-09-30T14:19:35.86438138Z level=info msg="Usage stats are ready to report"
Sep 30 14:19:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:35 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:36 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70004450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:36 compute-0 ceph-mon[74194]: pgmap v111: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 609 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Sep 30 14:19:37 compute-0 ceph-mgr[74485]: [dashboard INFO request] [192.168.122.100:42150] [POST] [200] [0.116s] [4.0B] [101a162a-1f5f-406e-92e4-db26fa845b07] /api/prometheus_receiver
Sep 30 14:19:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:37 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf70004450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v112: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Sep 30 14:19:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000036s ======
Sep 30 14:19:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:37.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Sep 30 14:19:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:37.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:37 compute-0 ceph-mon[74194]: pgmap v112: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Sep 30 14:19:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:37 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:38 compute-0 sudo[111747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:19:38 compute-0 sudo[111747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:38 compute-0 sudo[111747]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:38 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:38 compute-0 sshd-session[111772]: Accepted publickey for zuul from 192.168.122.30 port 40068 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:19:38 compute-0 systemd-logind[808]: New session 40 of user zuul.
Sep 30 14:19:38 compute-0 systemd[1]: Started Session 40 of User zuul.
Sep 30 14:19:38 compute-0 sshd-session[111772]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:19:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:39 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v113: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 386 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Sep 30 14:19:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:39.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:39.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:39 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:40 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:19:40 compute-0 python3.9[111927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:19:40 compute-0 ceph-mon[74194]: pgmap v113: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 386 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Sep 30 14:19:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:41 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:41 compute-0 sudo[112082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvzagzaqczuouffolxjqidivjfwwxjza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241980.9866912-68-204484068188044/AnsiballZ_getent.py'
Sep 30 14:19:41 compute-0 sudo[112082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v114: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 346 B/s rd, 0 op/s; 12 B/s, 0 objects/s recovering
Sep 30 14:19:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:41.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:41 compute-0 python3.9[112084]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Sep 30 14:19:41 compute-0 sudo[112082]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000037s ======
Sep 30 14:19:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:41.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Sep 30 14:19:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:41 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:41 compute-0 ceph-mon[74194]: pgmap v114: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 346 B/s rd, 0 op/s; 12 B/s, 0 objects/s recovering
Sep 30 14:19:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:42 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:42 compute-0 sudo[112236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akyfgkjerwtqksekhpktonjbqpvlzkbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241981.9889314-104-264646337353678/AnsiballZ_setup.py'
Sep 30 14:19:42 compute-0 sudo[112236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:42 compute-0 python3.9[112238]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:19:42 compute-0 sudo[112236]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:43 compute-0 sudo[112320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwypyggkltyaxkaznyqvriscxkwydgwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241981.9889314-104-264646337353678/AnsiballZ_dnf.py'
Sep 30 14:19:43 compute-0 sudo[112320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:43 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v115: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Sep 30 14:19:43 compute-0 python3.9[112322]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Sep 30 14:19:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:43.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:43.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:43 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:44 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:19:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:19:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:44] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Sep 30 14:19:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:44] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Sep 30 14:19:44 compute-0 ceph-mon[74194]: pgmap v115: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Sep 30 14:19:45 compute-0 sudo[112320]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:45 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:19:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v116: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 258 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Sep 30 14:19:45 compute-0 sudo[112476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtslzgtyimmlkmkfznetzhofhkboqcsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241985.2926705-146-230985944962844/AnsiballZ_dnf.py'
Sep 30 14:19:45 compute-0 sudo[112476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:45.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:45.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:45 compute-0 python3.9[112478]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:19:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:45 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf78003500 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:46 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:19:46 compute-0 ceph-mon[74194]: pgmap v116: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 258 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Sep 30 14:19:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:46 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:19:46.951Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:19:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:19:46.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:19:47 compute-0 sudo[112476]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:47 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf6c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:19:47 compute-0 sudo[112489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:19:47 compute-0 sudo[112489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:47 compute-0 sudo[112489]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v117: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Sep 30 14:19:47 compute-0 sudo[112531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:19:47 compute-0 sudo[112531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:47.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:47 compute-0 kernel: ganesha.nfsd[108680]: segfault at 50 ip 00007fe0496d732e sp 00007fe0027fb210 error 4 in libntirpc.so.5.8[7fe0496bc000+2c000] likely on CPU 7 (core 0, socket 7)
Sep 30 14:19:47 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 14:19:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[96817]: 30/09/2025 14:19:47 : epoch 68dbe642 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90004820 fd 48 proxy ignored for local
Sep 30 14:19:47 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Sep 30 14:19:47 compute-0 systemd[1]: Started Process Core Dump (PID 112693/UID 0).
Sep 30 14:19:48 compute-0 podman[112684]: 2025-09-30 14:19:48.195303409 +0000 UTC m=+0.254446130 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:19:48 compute-0 sudo[112779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bazfdnpabbtykftomavdxzyilhvrwyiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241987.6602933-170-120903504252902/AnsiballZ_systemd.py'
Sep 30 14:19:48 compute-0 sudo[112779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:48 compute-0 podman[112684]: 2025-09-30 14:19:48.330767445 +0000 UTC m=+0.389910146 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:19:48 compute-0 python3.9[112781]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 14:19:48 compute-0 sudo[112779]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:48 compute-0 ceph-mon[74194]: pgmap v117: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Sep 30 14:19:48 compute-0 podman[112903]: 2025-09-30 14:19:48.984854855 +0000 UTC m=+0.171238389 container exec 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:19:49 compute-0 podman[112929]: 2025-09-30 14:19:49.15735101 +0000 UTC m=+0.158306139 container exec_died 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:19:49 compute-0 podman[112903]: 2025-09-30 14:19:49.235238458 +0000 UTC m=+0.421621972 container exec_died 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:19:49 compute-0 systemd-coredump[112699]: Process 96821 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 66:
                                                    #0  0x00007fe0496d732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 14:19:49 compute-0 systemd[1]: systemd-coredump@0-112693-0.service: Deactivated successfully.
Sep 30 14:19:49 compute-0 systemd[1]: systemd-coredump@0-112693-0.service: Consumed 1.208s CPU time.
Sep 30 14:19:49 compute-0 podman[113073]: 2025-09-30 14:19:49.442303582 +0000 UTC m=+0.030763282 container died 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:19:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v118: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:19:49 compute-0 python3.9[113067]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:19:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:49.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-04bd5e04b989fcfa58fe4b394da2d15c4df665969f1a586e1386d49d135dadd4-merged.mount: Deactivated successfully.
Sep 30 14:19:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:49.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:49 compute-0 podman[113073]: 2025-09-30 14:19:49.933711256 +0000 UTC m=+0.522170946 container remove 7e80d1c63fee1012bbcba29dc5974698e4c3e504ac2a1caae6c03536ec058cd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:19:49 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Main process exited, code=exited, status=139/n/a
Sep 30 14:19:49 compute-0 ceph-mon[74194]: pgmap v118: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:19:50 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Failed with result 'exit-code'.
Sep 30 14:19:50 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.832s CPU time.
Sep 30 14:19:50 compute-0 sudo[113329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvhruduetedffftukuuypipxyfrpjolc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241989.8266928-224-256172494838484/AnsiballZ_sefcontext.py'
Sep 30 14:19:50 compute-0 sudo[113329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:19:50 compute-0 podman[113363]: 2025-09-30 14:19:50.569329723 +0000 UTC m=+0.172413323 container exec ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:19:50 compute-0 python3.9[113332]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Sep 30 14:19:50 compute-0 podman[113384]: 2025-09-30 14:19:50.727364591 +0000 UTC m=+0.090161976 container exec_died ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:19:50 compute-0 sudo[113329]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:50 compute-0 podman[113363]: 2025-09-30 14:19:50.812365087 +0000 UTC m=+0.415448667 container exec_died ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:19:51 compute-0 podman[113475]: 2025-09-30 14:19:51.184605989 +0000 UTC m=+0.157858512 container exec df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, vcs-type=git, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, distribution-scope=public, release=1793, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Sep 30 14:19:51 compute-0 podman[113475]: 2025-09-30 14:19:51.295651485 +0000 UTC m=+0.268903988 container exec_died df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, io.buildah.version=1.28.2, vcs-type=git, description=keepalived for Ceph, architecture=x86_64, release=1793, version=2.2.4)
Sep 30 14:19:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v119: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:19:51 compute-0 python3.9[113597]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:19:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000037s ======
Sep 30 14:19:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:51.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Sep 30 14:19:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:51.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:51 compute-0 podman[113642]: 2025-09-30 14:19:51.756048598 +0000 UTC m=+0.053906524 container exec b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:19:51 compute-0 podman[113642]: 2025-09-30 14:19:51.778875699 +0000 UTC m=+0.076733625 container exec_died b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:19:51 compute-0 podman[113741]: 2025-09-30 14:19:51.973586899 +0000 UTC m=+0.044631906 container exec 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:19:52 compute-0 podman[113741]: 2025-09-30 14:19:52.146624284 +0000 UTC m=+0.217669261 container exec_died 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:19:52 compute-0 sudo[113946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ethbiaqyaeaigzjfgiikpozdpbbkxkzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241992.0447285-278-69561587269273/AnsiballZ_dnf.py'
Sep 30 14:19:52 compute-0 sudo[113946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:52 compute-0 podman[113979]: 2025-09-30 14:19:52.508993331 +0000 UTC m=+0.053161619 container exec e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:19:52 compute-0 podman[113979]: 2025-09-30 14:19:52.544542818 +0000 UTC m=+0.088711096 container exec_died e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:19:52 compute-0 ceph-mon[74194]: pgmap v119: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:19:52 compute-0 python3.9[113950]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:19:52 compute-0 sudo[112531]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:19:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:19:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.645964) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241992646007, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2800, "num_deletes": 252, "total_data_size": 7315969, "memory_usage": 7594336, "flush_reason": "Manual Compaction"}
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Sep 30 14:19:52 compute-0 sudo[114024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:19:52 compute-0 sudo[114024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:52 compute-0 sudo[114024]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241992716334, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6874413, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8184, "largest_seqno": 10981, "table_properties": {"data_size": 6860924, "index_size": 8767, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3653, "raw_key_size": 31965, "raw_average_key_size": 22, "raw_value_size": 6832192, "raw_average_value_size": 4734, "num_data_blocks": 381, "num_entries": 1443, "num_filter_entries": 1443, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241867, "oldest_key_time": 1759241867, "file_creation_time": 1759241992, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 70422 microseconds, and 11458 cpu microseconds.
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.716384) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6874413 bytes OK
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.716407) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.718250) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.718272) EVENT_LOG_v1 {"time_micros": 1759241992718266, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.718289) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7303209, prev total WAL file size 7303209, number of live WAL files 2.
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.719602) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6713KB)], [23(11MB)]
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241992719648, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18856684, "oldest_snapshot_seqno": -1}
Sep 30 14:19:52 compute-0 sudo[114049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:19:52 compute-0 sudo[114049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4128 keys, 14527420 bytes, temperature: kUnknown
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241992887037, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14527420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14493750, "index_size": 22232, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 105148, "raw_average_key_size": 25, "raw_value_size": 14412171, "raw_average_value_size": 3491, "num_data_blocks": 956, "num_entries": 4128, "num_filter_entries": 4128, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759241992, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.887333) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14527420 bytes
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.890396) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.6 rd, 86.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.6, 11.4 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(4.9) write-amplify(2.1) OK, records in: 4664, records dropped: 536 output_compression: NoCompression
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.890430) EVENT_LOG_v1 {"time_micros": 1759241992890417, "job": 8, "event": "compaction_finished", "compaction_time_micros": 167469, "compaction_time_cpu_micros": 29707, "output_level": 6, "num_output_files": 1, "total_output_size": 14527420, "num_input_records": 4664, "num_output_records": 4128, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241992891632, "job": 8, "event": "table_file_deletion", "file_number": 25}
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759241992893750, "job": 8, "event": "table_file_deletion", "file_number": 23}
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.719534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.893897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.893903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.893905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.893907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:19:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:19:52.893909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:19:53 compute-0 sudo[114049]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:19:53 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:19:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:19:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:19:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 0 op/s
Sep 30 14:19:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:19:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:19:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:19:53 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:19:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:19:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:19:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:19:53 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:19:53 compute-0 sudo[114105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:19:53 compute-0 sudo[114105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:53 compute-0 sudo[114105]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:53 compute-0 sudo[114131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:19:53 compute-0 sudo[114131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:53.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:19:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:19:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:19:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:19:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:19:53 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Sep 30 14:19:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:53.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:53 compute-0 podman[114196]: 2025-09-30 14:19:53.777827122 +0000 UTC m=+0.038090395 container create 8539e67bb62069b112e388d1dab219fb9bf6245e5d1ea8aa38956c9383e0cf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:19:53 compute-0 systemd[1]: Started libpod-conmon-8539e67bb62069b112e388d1dab219fb9bf6245e5d1ea8aa38956c9383e0cf71.scope.
Sep 30 14:19:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:19:53 compute-0 podman[114196]: 2025-09-30 14:19:53.855083618 +0000 UTC m=+0.115346921 container init 8539e67bb62069b112e388d1dab219fb9bf6245e5d1ea8aa38956c9383e0cf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:19:53 compute-0 podman[114196]: 2025-09-30 14:19:53.763277023 +0000 UTC m=+0.023540326 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:19:53 compute-0 podman[114196]: 2025-09-30 14:19:53.875254865 +0000 UTC m=+0.135518138 container start 8539e67bb62069b112e388d1dab219fb9bf6245e5d1ea8aa38956c9383e0cf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_grothendieck, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 14:19:53 compute-0 podman[114196]: 2025-09-30 14:19:53.879005023 +0000 UTC m=+0.139268296 container attach 8539e67bb62069b112e388d1dab219fb9bf6245e5d1ea8aa38956c9383e0cf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_grothendieck, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:19:53 compute-0 priceless_grothendieck[114213]: 167 167
Sep 30 14:19:53 compute-0 systemd[1]: libpod-8539e67bb62069b112e388d1dab219fb9bf6245e5d1ea8aa38956c9383e0cf71.scope: Deactivated successfully.
Sep 30 14:19:53 compute-0 podman[114196]: 2025-09-30 14:19:53.880614345 +0000 UTC m=+0.140877618 container died 8539e67bb62069b112e388d1dab219fb9bf6245e5d1ea8aa38956c9383e0cf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-edc0b08c5800323a07b975e5f6b0bfcf94feee859294baf2f928dfeab76f0eb9-merged.mount: Deactivated successfully.
Sep 30 14:19:53 compute-0 podman[114196]: 2025-09-30 14:19:53.920756652 +0000 UTC m=+0.181019925 container remove 8539e67bb62069b112e388d1dab219fb9bf6245e5d1ea8aa38956c9383e0cf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 14:19:53 compute-0 systemd[1]: libpod-conmon-8539e67bb62069b112e388d1dab219fb9bf6245e5d1ea8aa38956c9383e0cf71.scope: Deactivated successfully.
Sep 30 14:19:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/141953 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:19:54 compute-0 podman[114237]: 2025-09-30 14:19:54.058000974 +0000 UTC m=+0.038777283 container create 3d980e75c5834107ce293483575c44545fb175126efa563441ef750c9156c5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shaw, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 14:19:54 compute-0 sudo[113946]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:54 compute-0 systemd[1]: Started libpod-conmon-3d980e75c5834107ce293483575c44545fb175126efa563441ef750c9156c5af.scope.
Sep 30 14:19:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef97d2850848a045ed075b18d59dc8151de7bc43f6a2aa9f3d479b424fd8c7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:54 compute-0 podman[114237]: 2025-09-30 14:19:54.042323345 +0000 UTC m=+0.023099684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef97d2850848a045ed075b18d59dc8151de7bc43f6a2aa9f3d479b424fd8c7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef97d2850848a045ed075b18d59dc8151de7bc43f6a2aa9f3d479b424fd8c7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef97d2850848a045ed075b18d59dc8151de7bc43f6a2aa9f3d479b424fd8c7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef97d2850848a045ed075b18d59dc8151de7bc43f6a2aa9f3d479b424fd8c7e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:54 compute-0 podman[114237]: 2025-09-30 14:19:54.15748288 +0000 UTC m=+0.138259219 container init 3d980e75c5834107ce293483575c44545fb175126efa563441ef750c9156c5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 14:19:54 compute-0 podman[114237]: 2025-09-30 14:19:54.163717542 +0000 UTC m=+0.144493851 container start 3d980e75c5834107ce293483575c44545fb175126efa563441ef750c9156c5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 14:19:54 compute-0 podman[114237]: 2025-09-30 14:19:54.167900352 +0000 UTC m=+0.148676701 container attach 3d980e75c5834107ce293483575c44545fb175126efa563441ef750c9156c5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shaw, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:19:54 compute-0 distracted_shaw[114262]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:19:54 compute-0 distracted_shaw[114262]: --> All data devices are unavailable
Sep 30 14:19:54 compute-0 systemd[1]: libpod-3d980e75c5834107ce293483575c44545fb175126efa563441ef750c9156c5af.scope: Deactivated successfully.
Sep 30 14:19:54 compute-0 podman[114237]: 2025-09-30 14:19:54.482157382 +0000 UTC m=+0.462933741 container died 3d980e75c5834107ce293483575c44545fb175126efa563441ef750c9156c5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:19:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-cef97d2850848a045ed075b18d59dc8151de7bc43f6a2aa9f3d479b424fd8c7e-merged.mount: Deactivated successfully.
Sep 30 14:19:54 compute-0 podman[114237]: 2025-09-30 14:19:54.534307473 +0000 UTC m=+0.515083792 container remove 3d980e75c5834107ce293483575c44545fb175126efa563441ef750c9156c5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shaw, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:19:54 compute-0 systemd[1]: libpod-conmon-3d980e75c5834107ce293483575c44545fb175126efa563441ef750c9156c5af.scope: Deactivated successfully.
Sep 30 14:19:54 compute-0 sudo[114131]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:54 compute-0 ceph-mon[74194]: pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 0 op/s
Sep 30 14:19:54 compute-0 ceph-mon[74194]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Sep 30 14:19:54 compute-0 sudo[114382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:19:54 compute-0 sudo[114382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:54 compute-0 sudo[114382]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:54 compute-0 sudo[114432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:19:54 compute-0 sudo[114432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:54 compute-0 sudo[114480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nehuushtwutqmkcanotgzaukeluxmtun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241994.291663-302-203322326983978/AnsiballZ_command.py'
Sep 30 14:19:54 compute-0 sudo[114480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:54] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Sep 30 14:19:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:19:54] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Sep 30 14:19:54 compute-0 python3.9[114484]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:19:55 compute-0 podman[114532]: 2025-09-30 14:19:55.051587232 +0000 UTC m=+0.040551609 container create e624c34bb92fe445461dfeffa1df258045daee8dd01043ff3a14eedad534af14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:19:55 compute-0 systemd[1]: Started libpod-conmon-e624c34bb92fe445461dfeffa1df258045daee8dd01043ff3a14eedad534af14.scope.
Sep 30 14:19:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:19:55 compute-0 podman[114532]: 2025-09-30 14:19:55.107265735 +0000 UTC m=+0.096230132 container init e624c34bb92fe445461dfeffa1df258045daee8dd01043ff3a14eedad534af14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:19:55 compute-0 podman[114532]: 2025-09-30 14:19:55.114000631 +0000 UTC m=+0.102964998 container start e624c34bb92fe445461dfeffa1df258045daee8dd01043ff3a14eedad534af14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_heisenberg, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:19:55 compute-0 podman[114532]: 2025-09-30 14:19:55.11778196 +0000 UTC m=+0.106746367 container attach e624c34bb92fe445461dfeffa1df258045daee8dd01043ff3a14eedad534af14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_heisenberg, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:19:55 compute-0 pensive_heisenberg[114548]: 167 167
Sep 30 14:19:55 compute-0 systemd[1]: libpod-e624c34bb92fe445461dfeffa1df258045daee8dd01043ff3a14eedad534af14.scope: Deactivated successfully.
Sep 30 14:19:55 compute-0 podman[114532]: 2025-09-30 14:19:55.120725947 +0000 UTC m=+0.109690324 container died e624c34bb92fe445461dfeffa1df258045daee8dd01043ff3a14eedad534af14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_heisenberg, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 14:19:55 compute-0 podman[114532]: 2025-09-30 14:19:55.032931116 +0000 UTC m=+0.021895513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfc9f88b90a42680930898f3ac22f3b6adc711c4e25aeec95042f51b5860eb44-merged.mount: Deactivated successfully.
Sep 30 14:19:55 compute-0 podman[114532]: 2025-09-30 14:19:55.1626062 +0000 UTC m=+0.151570567 container remove e624c34bb92fe445461dfeffa1df258045daee8dd01043ff3a14eedad534af14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_heisenberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:19:55 compute-0 systemd[1]: libpod-conmon-e624c34bb92fe445461dfeffa1df258045daee8dd01043ff3a14eedad534af14.scope: Deactivated successfully.
Sep 30 14:19:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v121: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 0 op/s
Sep 30 14:19:55 compute-0 podman[114571]: 2025-09-30 14:19:55.312153122 +0000 UTC m=+0.040461957 container create 80c8be5845efa49eebe3464d4b10d212fcf9916cc9d082869cf152e5b408956b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_chatterjee, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 14:19:55 compute-0 systemd[1]: Started libpod-conmon-80c8be5845efa49eebe3464d4b10d212fcf9916cc9d082869cf152e5b408956b.scope.
Sep 30 14:19:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c8ac1d9477b78c5c432fb33f1d5dadda789e11a84cc4e3511ba7ac80356432/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c8ac1d9477b78c5c432fb33f1d5dadda789e11a84cc4e3511ba7ac80356432/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c8ac1d9477b78c5c432fb33f1d5dadda789e11a84cc4e3511ba7ac80356432/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c8ac1d9477b78c5c432fb33f1d5dadda789e11a84cc4e3511ba7ac80356432/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:55 compute-0 podman[114571]: 2025-09-30 14:19:55.29330253 +0000 UTC m=+0.021611175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:19:55 compute-0 podman[114571]: 2025-09-30 14:19:55.39411917 +0000 UTC m=+0.122427795 container init 80c8be5845efa49eebe3464d4b10d212fcf9916cc9d082869cf152e5b408956b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:19:55 compute-0 podman[114571]: 2025-09-30 14:19:55.402436837 +0000 UTC m=+0.130745452 container start 80c8be5845efa49eebe3464d4b10d212fcf9916cc9d082869cf152e5b408956b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:19:55 compute-0 podman[114571]: 2025-09-30 14:19:55.406115763 +0000 UTC m=+0.134424408 container attach 80c8be5845efa49eebe3464d4b10d212fcf9916cc9d082869cf152e5b408956b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:19:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:19:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:55.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:55 compute-0 sudo[114480]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:55.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]: {
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:     "0": [
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:         {
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "devices": [
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "/dev/loop3"
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             ],
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "lv_name": "ceph_lv0",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "lv_size": "21470642176",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "name": "ceph_lv0",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "tags": {
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.cluster_name": "ceph",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.crush_device_class": "",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.encrypted": "0",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.osd_id": "0",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.type": "block",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.vdo": "0",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:                 "ceph.with_tpm": "0"
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             },
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "type": "block",
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:             "vg_name": "ceph_vg0"
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:         }
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]:     ]
Sep 30 14:19:55 compute-0 musing_chatterjee[114625]: }
Sep 30 14:19:55 compute-0 systemd[1]: libpod-80c8be5845efa49eebe3464d4b10d212fcf9916cc9d082869cf152e5b408956b.scope: Deactivated successfully.
Sep 30 14:19:55 compute-0 podman[114571]: 2025-09-30 14:19:55.710437325 +0000 UTC m=+0.438745940 container died 80c8be5845efa49eebe3464d4b10d212fcf9916cc9d082869cf152e5b408956b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3c8ac1d9477b78c5c432fb33f1d5dadda789e11a84cc4e3511ba7ac80356432-merged.mount: Deactivated successfully.
Sep 30 14:19:55 compute-0 podman[114571]: 2025-09-30 14:19:55.749792682 +0000 UTC m=+0.478101297 container remove 80c8be5845efa49eebe3464d4b10d212fcf9916cc9d082869cf152e5b408956b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_chatterjee, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:19:55 compute-0 systemd[1]: libpod-conmon-80c8be5845efa49eebe3464d4b10d212fcf9916cc9d082869cf152e5b408956b.scope: Deactivated successfully.
Sep 30 14:19:55 compute-0 sudo[114432]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:55 compute-0 sudo[114763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:19:55 compute-0 sudo[114763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:55 compute-0 sudo[114763]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:55 compute-0 sudo[114806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:19:55 compute-0 sudo[114806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:56 compute-0 sudo[114994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkybsesoymrtorgdpwiyxzhhqgphzhxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241995.8680692-326-264301114273522/AnsiballZ_file.py'
Sep 30 14:19:56 compute-0 sudo[114994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:56 compute-0 podman[114955]: 2025-09-30 14:19:56.27930469 +0000 UTC m=+0.039698897 container create 4ae14359aae2f7139f8f61ff574c85ac701cb3269594ae1f6ba48fe717d12ff5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:19:56 compute-0 systemd[1]: Started libpod-conmon-4ae14359aae2f7139f8f61ff574c85ac701cb3269594ae1f6ba48fe717d12ff5.scope.
Sep 30 14:19:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:19:56 compute-0 podman[114955]: 2025-09-30 14:19:56.26245961 +0000 UTC m=+0.022853847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:19:56 compute-0 podman[114955]: 2025-09-30 14:19:56.359739319 +0000 UTC m=+0.120133536 container init 4ae14359aae2f7139f8f61ff574c85ac701cb3269594ae1f6ba48fe717d12ff5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:19:56 compute-0 podman[114955]: 2025-09-30 14:19:56.368263802 +0000 UTC m=+0.128658009 container start 4ae14359aae2f7139f8f61ff574c85ac701cb3269594ae1f6ba48fe717d12ff5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_rubin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:19:56 compute-0 podman[114955]: 2025-09-30 14:19:56.372221985 +0000 UTC m=+0.132616212 container attach 4ae14359aae2f7139f8f61ff574c85ac701cb3269594ae1f6ba48fe717d12ff5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 14:19:56 compute-0 upbeat_rubin[115000]: 167 167
Sep 30 14:19:56 compute-0 systemd[1]: libpod-4ae14359aae2f7139f8f61ff574c85ac701cb3269594ae1f6ba48fe717d12ff5.scope: Deactivated successfully.
Sep 30 14:19:56 compute-0 podman[114955]: 2025-09-30 14:19:56.37473439 +0000 UTC m=+0.135128597 container died 4ae14359aae2f7139f8f61ff574c85ac701cb3269594ae1f6ba48fe717d12ff5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_rubin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:19:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-49d98e629955f14230972b4881e756944af4ca479f0e6674799e1034714d9343-merged.mount: Deactivated successfully.
Sep 30 14:19:56 compute-0 podman[114955]: 2025-09-30 14:19:56.410118654 +0000 UTC m=+0.170512861 container remove 4ae14359aae2f7139f8f61ff574c85ac701cb3269594ae1f6ba48fe717d12ff5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:19:56 compute-0 systemd[1]: libpod-conmon-4ae14359aae2f7139f8f61ff574c85ac701cb3269594ae1f6ba48fe717d12ff5.scope: Deactivated successfully.
Sep 30 14:19:56 compute-0 python3.9[114996]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Sep 30 14:19:56 compute-0 sudo[114994]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:56 compute-0 podman[115024]: 2025-09-30 14:19:56.561451113 +0000 UTC m=+0.039238605 container create e12320778285bc5cca69e093e9937ca1842201545370d5bc617a60647d23e401 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:19:56 compute-0 systemd[1]: Started libpod-conmon-e12320778285bc5cca69e093e9937ca1842201545370d5bc617a60647d23e401.scope.
Sep 30 14:19:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8643ce468eaeeb26300e7f03a5fe19fec673300c14457b8535c612eae647dcd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8643ce468eaeeb26300e7f03a5fe19fec673300c14457b8535c612eae647dcd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8643ce468eaeeb26300e7f03a5fe19fec673300c14457b8535c612eae647dcd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8643ce468eaeeb26300e7f03a5fe19fec673300c14457b8535c612eae647dcd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:19:56 compute-0 podman[115024]: 2025-09-30 14:19:56.618273806 +0000 UTC m=+0.096061318 container init e12320778285bc5cca69e093e9937ca1842201545370d5bc617a60647d23e401 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_chatelet, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 14:19:56 compute-0 podman[115024]: 2025-09-30 14:19:56.62534816 +0000 UTC m=+0.103135652 container start e12320778285bc5cca69e093e9937ca1842201545370d5bc617a60647d23e401 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:19:56 compute-0 podman[115024]: 2025-09-30 14:19:56.62914939 +0000 UTC m=+0.106936882 container attach e12320778285bc5cca69e093e9937ca1842201545370d5bc617a60647d23e401 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:19:56 compute-0 podman[115024]: 2025-09-30 14:19:56.546368539 +0000 UTC m=+0.024156071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:19:56 compute-0 ceph-mon[74194]: pgmap v121: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 0 op/s
Sep 30 14:19:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:19:56.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:19:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v122: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 349 B/s rd, 0 op/s
Sep 30 14:19:57 compute-0 python3.9[115245]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:19:57 compute-0 lvm[115269]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:19:57 compute-0 lvm[115269]: VG ceph_vg0 finished
Sep 30 14:19:57 compute-0 boring_chatelet[115064]: {}
Sep 30 14:19:57 compute-0 systemd[1]: libpod-e12320778285bc5cca69e093e9937ca1842201545370d5bc617a60647d23e401.scope: Deactivated successfully.
Sep 30 14:19:57 compute-0 systemd[1]: libpod-e12320778285bc5cca69e093e9937ca1842201545370d5bc617a60647d23e401.scope: Consumed 1.139s CPU time.
Sep 30 14:19:57 compute-0 podman[115024]: 2025-09-30 14:19:57.463337019 +0000 UTC m=+0.941124531 container died e12320778285bc5cca69e093e9937ca1842201545370d5bc617a60647d23e401 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:19:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8643ce468eaeeb26300e7f03a5fe19fec673300c14457b8535c612eae647dcd8-merged.mount: Deactivated successfully.
Sep 30 14:19:57 compute-0 podman[115024]: 2025-09-30 14:19:57.50593051 +0000 UTC m=+0.983718002 container remove e12320778285bc5cca69e093e9937ca1842201545370d5bc617a60647d23e401 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_chatelet, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:19:57 compute-0 systemd[1]: libpod-conmon-e12320778285bc5cca69e093e9937ca1842201545370d5bc617a60647d23e401.scope: Deactivated successfully.
Sep 30 14:19:57 compute-0 sudo[114806]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:19:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:19:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:57.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:57 compute-0 sudo[115352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:19:57 compute-0 sudo[115352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:57 compute-0 sudo[115352]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:57.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:57 compute-0 sudo[115460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naezbhtjesxzekvnijompkmlmevilexv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241997.5446372-374-114606589253215/AnsiballZ_dnf.py'
Sep 30 14:19:57 compute-0 sudo[115460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:19:58 compute-0 python3.9[115462]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:19:58 compute-0 sudo[115464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:19:58 compute-0 sudo[115464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:19:58 compute-0 sudo[115464]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:58 compute-0 ceph-mon[74194]: pgmap v122: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 349 B/s rd, 0 op/s
Sep 30 14:19:58 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:58 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v123: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 87 B/s rd, 0 op/s
Sep 30 14:19:59 compute-0 sudo[115460]: pam_unix(sudo:session): session closed for user root
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:19:59
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.nfs', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'backups', 'vms', 'cephfs.cephfs.data']
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:19:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:19:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:19:59.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:19:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Sep 30 14:19:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:19:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:19:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:19:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:19:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:19:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:19:59.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:19:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:20:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Sep 30 14:20:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:20:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Sep 30 14:20:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Sep 30 14:20:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.qrbicy on compute-0 is in unknown state
Sep 30 14:20:00 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Scheduled restart job, restart counter is at 1.
Sep 30 14:20:00 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:20:00 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.832s CPU time.
Sep 30 14:20:00 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:20:00 compute-0 sudo[115648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbhkxqrfvsdplgubgwwuzxnihhmgxstx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759241999.9113805-401-68901375908542/AnsiballZ_dnf.py'
Sep 30 14:20:00 compute-0 sudo[115648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:00 compute-0 podman[115686]: 2025-09-30 14:20:00.354940117 +0000 UTC m=+0.041871283 container create c8a84c1858ecfbc1b3076e22e85211e93c515229cc0f18640a8f35506d7d81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab1f7ab84368255a0e4969dcd350cd896d1a650c69e05a6445c458076819bc2/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab1f7ab84368255a0e4969dcd350cd896d1a650c69e05a6445c458076819bc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab1f7ab84368255a0e4969dcd350cd896d1a650c69e05a6445c458076819bc2/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab1f7ab84368255a0e4969dcd350cd896d1a650c69e05a6445c458076819bc2/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:20:00 compute-0 podman[115686]: 2025-09-30 14:20:00.408578607 +0000 UTC m=+0.095509793 container init c8a84c1858ecfbc1b3076e22e85211e93c515229cc0f18640a8f35506d7d81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:20:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:00 compute-0 podman[115686]: 2025-09-30 14:20:00.41444861 +0000 UTC m=+0.101379766 container start c8a84c1858ecfbc1b3076e22e85211e93c515229cc0f18640a8f35506d7d81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:20:00 compute-0 bash[115686]: c8a84c1858ecfbc1b3076e22e85211e93c515229cc0f18640a8f35506d7d81a2
Sep 30 14:20:00 compute-0 podman[115686]: 2025-09-30 14:20:00.337383839 +0000 UTC m=+0.024315025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:20:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:00 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:20:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:00 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:20:00 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:20:00 compute-0 python3.9[115654]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:20:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:00 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:20:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:00 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:20:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:00 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:20:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:00 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:20:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:00 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:20:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:00 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:20:00 compute-0 ceph-mon[74194]: pgmap v123: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 87 B/s rd, 0 op/s
Sep 30 14:20:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:20:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:20:00 compute-0 ceph-mon[74194]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Sep 30 14:20:00 compute-0 ceph-mon[74194]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:20:00 compute-0 ceph-mon[74194]:      osd.1 observed slow operation indications in BlueStore
Sep 30 14:20:00 compute-0 ceph-mon[74194]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Sep 30 14:20:00 compute-0 ceph-mon[74194]:     daemon nfs.cephfs.2.0.compute-0.qrbicy on compute-0 is in unknown state
Sep 30 14:20:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:20:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:20:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:20:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:20:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:20:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:20:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:20:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:20:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:20:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:20:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v124: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 436 B/s rd, 87 B/s wr, 0 op/s
Sep 30 14:20:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142001 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:20:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:01.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:01.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:01 compute-0 sudo[115648]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:02 compute-0 sudo[115895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzaujlolcgpelodspqohuvtvhkahvzfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242002.3343754-437-194058039578612/AnsiballZ_stat.py'
Sep 30 14:20:02 compute-0 sudo[115895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:02 compute-0 ceph-mon[74194]: pgmap v124: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 436 B/s rd, 87 B/s wr, 0 op/s
Sep 30 14:20:02 compute-0 python3.9[115897]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:20:02 compute-0 sudo[115895]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v125: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 436 B/s rd, 87 B/s wr, 0 op/s
Sep 30 14:20:03 compute-0 sudo[116050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zahpvqhkppwhtwsecvjtngprtsmaabit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242002.9867582-461-32589064640455/AnsiballZ_slurp.py'
Sep 30 14:20:03 compute-0 sudo[116050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:03.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:03 compute-0 python3.9[116052]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Sep 30 14:20:03 compute-0 sudo[116050]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:03.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:04 compute-0 ceph-mon[74194]: pgmap v125: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 436 B/s rd, 87 B/s wr, 0 op/s
Sep 30 14:20:04 compute-0 sshd-session[111775]: Connection closed by 192.168.122.30 port 40068
Sep 30 14:20:04 compute-0 sshd-session[111772]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:20:04 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Sep 30 14:20:04 compute-0 systemd[1]: session-40.scope: Consumed 17.884s CPU time.
Sep 30 14:20:04 compute-0 systemd-logind[808]: Session 40 logged out. Waiting for processes to exit.
Sep 30 14:20:04 compute-0 systemd-logind[808]: Removed session 40.
Sep 30 14:20:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:04] "GET /metrics HTTP/1.1" 200 48393 "" "Prometheus/2.51.0"
Sep 30 14:20:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:04] "GET /metrics HTTP/1.1" 200 48393 "" "Prometheus/2.51.0"
Sep 30 14:20:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v126: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:20:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:05.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:05.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:06 compute-0 sshd-session[116078]: Invalid user gameserver from 210.90.155.80 port 35992
Sep 30 14:20:06 compute-0 sshd-session[116078]: Received disconnect from 210.90.155.80 port 35992:11: Bye Bye [preauth]
Sep 30 14:20:06 compute-0 sshd-session[116078]: Disconnected from invalid user gameserver 210.90.155.80 port 35992 [preauth]
Sep 30 14:20:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:06 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:20:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:06 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:20:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:06 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:20:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:06 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:20:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:06 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:20:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:06 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:20:06 compute-0 ceph-mon[74194]: pgmap v126: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:20:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:20:06.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:20:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v127: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Sep 30 14:20:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:07.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:07.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:08 compute-0 ceph-mon[74194]: pgmap v127: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Sep 30 14:20:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v128: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 14:20:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:20:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:09.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:20:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:20:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:09.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:20:09 compute-0 ceph-mon[74194]: pgmap v128: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Sep 30 14:20:10 compute-0 sshd-session[116086]: Accepted publickey for zuul from 192.168.122.30 port 40786 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:20:10 compute-0 systemd-logind[808]: New session 41 of user zuul.
Sep 30 14:20:10 compute-0 systemd[1]: Started Session 41 of User zuul.
Sep 30 14:20:10 compute-0 sshd-session[116086]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:20:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:11 compute-0 python3.9[116239]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:20:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v129: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:20:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000022s ======
Sep 30 14:20:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:11.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Sep 30 14:20:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:20:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:11.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:20:12 compute-0 python3.9[116395]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:20:12 compute-0 ceph-mon[74194]: pgmap v129: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:20:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:12 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:20:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v130: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:20:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:13 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a80000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:13 compute-0 python3.9[116599]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:20:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:13.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:20:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:13.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:20:13 compute-0 sshd-session[116089]: Connection closed by 192.168.122.30 port 40786
Sep 30 14:20:13 compute-0 sshd-session[116086]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:20:13 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Sep 30 14:20:13 compute-0 systemd[1]: session-41.scope: Consumed 2.260s CPU time.
Sep 30 14:20:13 compute-0 systemd-logind[808]: Session 41 logged out. Waiting for processes to exit.
Sep 30 14:20:13 compute-0 systemd-logind[808]: Removed session 41.
Sep 30 14:20:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:13 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:14 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a54000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:20:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:20:14 compute-0 ceph-mon[74194]: pgmap v130: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:20:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:14] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Sep 30 14:20:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:14] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Sep 30 14:20:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v131: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:20:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:15 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a54000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:15.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:15 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:20:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:15 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:20:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:20:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:15.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:15 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a78001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142015 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:20:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:16 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:16 compute-0 ceph-mon[74194]: pgmap v131: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:20:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:20:16.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:20:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v132: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Sep 30 14:20:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:17 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a54000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:17.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:17.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:17 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a50000ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:18 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:18 compute-0 sudo[116636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:20:18 compute-0 sudo[116636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:20:18 compute-0 sudo[116636]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:18 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:20:18 compute-0 ceph-mon[74194]: pgmap v132: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Sep 30 14:20:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v133: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 14:20:19 compute-0 sshd-session[116661]: Accepted publickey for zuul from 192.168.122.30 port 34394 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:20:19 compute-0 systemd-logind[808]: New session 42 of user zuul.
Sep 30 14:20:19 compute-0 systemd[1]: Started Session 42 of User zuul.
Sep 30 14:20:19 compute-0 sshd-session[116661]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:20:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:19 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:19.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:19.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:19 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a54001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:20 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a500019c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:20 compute-0 python3.9[116818]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:20:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:20 compute-0 sshd-session[116743]: Received disconnect from 193.46.255.99 port 59468:11:  [preauth]
Sep 30 14:20:20 compute-0 sshd-session[116743]: Disconnected from authenticating user root 193.46.255.99 port 59468 [preauth]
Sep 30 14:20:20 compute-0 ceph-mon[74194]: pgmap v133: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 14:20:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v134: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 14:20:21 compute-0 python3.9[116972]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:20:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142021 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:20:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:21 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000022s ======
Sep 30 14:20:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Sep 30 14:20:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:21.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:21 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:22 compute-0 sudo[117128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npluojneikactqlzrytycmlseelhcotw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242021.690068-80-164342890920416/AnsiballZ_setup.py'
Sep 30 14:20:22 compute-0 sudo[117128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:22 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a54001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:22 compute-0 python3.9[117130]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:20:22 compute-0 sudo[117128]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:22 compute-0 ceph-mon[74194]: pgmap v134: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 14:20:22 compute-0 sudo[117212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezenyfinrxilpbjbtmqtzfaqtjxwgafz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242021.690068-80-164342890920416/AnsiballZ_dnf.py'
Sep 30 14:20:22 compute-0 sudo[117212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:23 compute-0 python3.9[117214]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:20:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v135: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:20:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:23 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a500019c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:23.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:23.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:23 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:24 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:24 compute-0 sudo[117212]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:24] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Sep 30 14:20:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:24] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Sep 30 14:20:24 compute-0 ceph-mon[74194]: pgmap v135: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:20:25 compute-0 sudo[117367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwhimhmzydssanihjrwpcfokarkhjaob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242024.756602-116-19618735182288/AnsiballZ_setup.py'
Sep 30 14:20:25 compute-0 sudo[117367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:20:25 compute-0 python3.9[117369]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:20:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:25 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a54001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:25.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:25 compute-0 sudo[117367]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:25.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:25 compute-0 ceph-mon[74194]: pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:20:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:25 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a500019c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:26 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:26 compute-0 sudo[117564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbwtylerwohrusmwpsdpofnenzsqeppi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242026.0510046-149-203153452447601/AnsiballZ_file.py'
Sep 30 14:20:26 compute-0 sudo[117564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:26 compute-0 python3.9[117566]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:20:26 compute-0 sudo[117564]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:20:26.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:20:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:20:27 compute-0 sudo[117717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itfxdeavehtdonbsvawxiqkcimdmuyoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242026.8989673-173-241944533295784/AnsiballZ_command.py'
Sep 30 14:20:27 compute-0 sudo[117717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:27 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a54001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:27 compute-0 python3.9[117719]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:20:27 compute-0 sudo[117717]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:27.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:27.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:27 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a54001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:28 compute-0 sudo[117883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqfpalhkfazjyxliwpqhekvksoxecppw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242027.8750117-197-270789146686729/AnsiballZ_stat.py'
Sep 30 14:20:28 compute-0 sudo[117883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:28 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a50002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:28 compute-0 python3.9[117885]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:20:28 compute-0 sudo[117883]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:28 compute-0 ceph-mon[74194]: pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:20:28 compute-0 sudo[117961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-parzkiavemyhehozjlcgbgrvrezqgwvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242027.8750117-197-270789146686729/AnsiballZ_file.py'
Sep 30 14:20:28 compute-0 sudo[117961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:28 compute-0 python3.9[117963]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:20:28 compute-0 sudo[117961]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:20:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:29 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:29 compute-0 sudo[118114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnkhpnxjayanqmcuhasozpilbaprqsrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242029.1381497-233-136036612580646/AnsiballZ_stat.py'
Sep 30 14:20:29 compute-0 sudo[118114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:29 compute-0 python3.9[118116]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:20:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:20:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:20:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000022s ======
Sep 30 14:20:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:29.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Sep 30 14:20:29 compute-0 sudo[118114]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:20:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:20:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:29.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:20:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:20:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:20:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:20:29 compute-0 sudo[118193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mygcpizfzomdsyneshricngneqmiglmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242029.1381497-233-136036612580646/AnsiballZ_file.py'
Sep 30 14:20:29 compute-0 sudo[118193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:29 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:30 compute-0 python3.9[118195]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:20:30 compute-0 sudo[118193]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:30 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:30 compute-0 ceph-mon[74194]: pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:20:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:20:30 compute-0 sudo[118345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itgjeothxiiuurraszlstamcfzyfkxwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242030.3686745-272-45960720597753/AnsiballZ_ini_file.py'
Sep 30 14:20:30 compute-0 sudo[118345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:31 compute-0 python3.9[118347]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:20:31 compute-0 sudo[118345]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:20:31 compute-0 sudo[118498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzaejnvmlbnrwhovxziftzwtnanbncml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242031.1532242-272-107610758166453/AnsiballZ_ini_file.py'
Sep 30 14:20:31 compute-0 sudo[118498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:31 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a5c000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:31 compute-0 python3.9[118500]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:20:31 compute-0 sudo[118498]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:20:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:31.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:20:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:31.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:31 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:32 compute-0 sudo[118651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kywrkqvoflrjaljomxscxzozdtihfbob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242031.7988498-272-275023197174762/AnsiballZ_ini_file.py'
Sep 30 14:20:32 compute-0 sudo[118651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:32 compute-0 python3.9[118653]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:20:32 compute-0 sudo[118651]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:32 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:32 compute-0 ceph-mon[74194]: pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:20:32 compute-0 sudo[118803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwyftewfffgijzdcmgehmmjtivmqspgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242032.3988354-272-159617010448243/AnsiballZ_ini_file.py'
Sep 30 14:20:32 compute-0 sudo[118803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:32 compute-0 python3.9[118805]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:20:32 compute-0 sudo[118803]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:20:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:33 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:33 compute-0 sudo[118956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvfvvitjoobtwurlshghajvyddpswldq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242033.1861043-365-34800597452718/AnsiballZ_dnf.py'
Sep 30 14:20:33 compute-0 sudo[118956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:33.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:33 compute-0 python3.9[118958]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:20:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:33.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:33 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a5c001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:34 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a5c001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:34 compute-0 ceph-mon[74194]: pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:20:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:34] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Sep 30 14:20:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:34] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Sep 30 14:20:35 compute-0 sudo[118956]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:20:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:35 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a5c001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:35.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:35.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:35 compute-0 sudo[119112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aszllxehaonrejpffmwxvsicpjywywil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242035.589943-398-122635597895558/AnsiballZ_setup.py'
Sep 30 14:20:35 compute-0 sudo[119112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:35 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:36 compute-0 python3.9[119114]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:20:36 compute-0 sudo[119112]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:36 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:36 compute-0 ceph-mon[74194]: pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:20:36 compute-0 sudo[119266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axlmcixvyffkhuiqdnqinyjblnenldsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242036.5184512-422-216274363997160/AnsiballZ_stat.py'
Sep 30 14:20:36 compute-0 sudo[119266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:20:36.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:20:36 compute-0 python3.9[119268]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:20:37 compute-0 sudo[119266]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:20:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:37 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a50002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:37 compute-0 sudo[119419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glhzmylnubcslaxhdvmweokarshkxrkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242037.2709188-449-77103522541471/AnsiballZ_stat.py'
Sep 30 14:20:37 compute-0 sudo[119419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:37.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:37.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:37 compute-0 python3.9[119421]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:20:37 compute-0 sudo[119419]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:37 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:38 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:38 compute-0 sudo[119522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:20:38 compute-0 sudo[119522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:20:38 compute-0 sudo[119522]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:38 compute-0 sudo[119597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-strcisrkkpsvknenvxfzhegljxaixmwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242038.0412972-479-50691521126449/AnsiballZ_service_facts.py'
Sep 30 14:20:38 compute-0 sudo[119597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:38 compute-0 python3.9[119599]: ansible-service_facts Invoked
Sep 30 14:20:38 compute-0 network[119616]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 14:20:38 compute-0 network[119617]: 'network-scripts' will be removed from distribution in near future.
Sep 30 14:20:38 compute-0 network[119618]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 14:20:38 compute-0 ceph-mon[74194]: pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:20:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:20:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:39 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a780034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:39.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000022s ======
Sep 30 14:20:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:39.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Sep 30 14:20:39 compute-0 ceph-mon[74194]: pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:20:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:39 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a50002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:40 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:20:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:41 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a5c001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:41 compute-0 sudo[119597]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:20:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:41.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:20:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:41.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:41 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a5c001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:42 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a50002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:42 compute-0 ceph-mon[74194]: pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:20:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:20:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:43 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000022s ======
Sep 30 14:20:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:43.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Sep 30 14:20:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:43.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:43 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:44 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:44 compute-0 ceph-mon[74194]: pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:20:44 compute-0 sudo[119910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeewloruygrfulukfixvtputwmrzgqrm ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759242044.0167906-518-87181645371990/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759242044.0167906-518-87181645371990/args'
Sep 30 14:20:44 compute-0 sudo[119910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:44 compute-0 sudo[119910]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:20:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:20:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:44] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:20:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:44] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:20:45 compute-0 sudo[120077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phgecxynyzgjlapgblypoupqwdqmusyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242044.9591804-551-23929924891898/AnsiballZ_dnf.py'
Sep 30 14:20:45 compute-0 sudo[120077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:20:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142045 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:20:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:45 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a50003f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:20:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:45 compute-0 python3.9[120079]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:20:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:45.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:45.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:45 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:46 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a48000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:46 compute-0 ceph-mon[74194]: pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:20:46 compute-0 sudo[120077]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:20:46.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:20:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:20:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:47 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a78004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:47.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:47.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:47 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a50003f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:48 compute-0 sudo[120235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdplkdkuoqgasjybcdaburtquwyistaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242047.4887354-590-65439500337256/AnsiballZ_package_facts.py'
Sep 30 14:20:48 compute-0 sudo[120235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:48 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a74003a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:48 compute-0 python3.9[120237]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Sep 30 14:20:48 compute-0 ceph-mon[74194]: pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:20:48 compute-0 sudo[120235]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:48 compute-0 ceph-mgr[74485]: [dashboard INFO request] [192.168.122.100:41240] [POST] [200] [0.001s] [4.0B] [ae470118-fe3a-495d-901e-d5edf10f892f] /api/prometheus_receiver
Sep 30 14:20:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:20:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:49 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a480016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:49.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:49 compute-0 sudo[120388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjrbnbovrcjxwmwbzmutlmwghxuwvqfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242049.3470566-620-279934980266074/AnsiballZ_stat.py'
Sep 30 14:20:49 compute-0 sudo[120388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000022s ======
Sep 30 14:20:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:49.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Sep 30 14:20:49 compute-0 python3.9[120390]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:20:49 compute-0 sudo[120388]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:49 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a78004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:20:50 compute-0 sudo[120467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skrtjijkhdywdznzgtjyaikkslphcbzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242049.3470566-620-279934980266074/AnsiballZ_file.py'
Sep 30 14:20:50 compute-0 sudo[120467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[115701]: 30/09/2025 14:20:50 : epoch 68dbe710 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9a50003f50 fd 39 proxy ignored for local
Sep 30 14:20:50 compute-0 kernel: ganesha.nfsd[116604]: segfault at 50 ip 00007f9b2c02b32e sp 00007f9ae3ffe210 error 4 in libntirpc.so.5.8[7f9b2c010000+2c000] likely on CPU 2 (core 0, socket 2)
Sep 30 14:20:50 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 14:20:50 compute-0 python3.9[120469]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:20:50 compute-0 systemd[1]: Started Process Core Dump (PID 120470/UID 0).
Sep 30 14:20:50 compute-0 sudo[120467]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:50 compute-0 ceph-mon[74194]: pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:20:50 compute-0 sudo[120621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhbowsnsfkqnkerdrzpqfdhqmvsxduvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242050.5504296-656-262988686142896/AnsiballZ_stat.py'
Sep 30 14:20:50 compute-0 sudo[120621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:51 compute-0 python3.9[120623]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:20:51 compute-0 sudo[120621]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:20:51 compute-0 sudo[120699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlufdppuzcbrxackktgpxdojgdcazuhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242050.5504296-656-262988686142896/AnsiballZ_file.py'
Sep 30 14:20:51 compute-0 sudo[120699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:51 compute-0 python3.9[120702]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:20:51 compute-0 sudo[120699]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:51 compute-0 systemd-coredump[120471]: Process 115705 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f9b2c02b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 14:20:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:51.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:51 compute-0 systemd[1]: systemd-coredump@1-120470-0.service: Deactivated successfully.
Sep 30 14:20:51 compute-0 systemd[1]: systemd-coredump@1-120470-0.service: Consumed 1.197s CPU time.
Sep 30 14:20:51 compute-0 podman[120731]: 2025-09-30 14:20:51.710959436 +0000 UTC m=+0.032307245 container died c8a84c1858ecfbc1b3076e22e85211e93c515229cc0f18640a8f35506d7d81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 14:20:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ab1f7ab84368255a0e4969dcd350cd896d1a650c69e05a6445c458076819bc2-merged.mount: Deactivated successfully.
Sep 30 14:20:51 compute-0 podman[120731]: 2025-09-30 14:20:51.751023957 +0000 UTC m=+0.072371746 container remove c8a84c1858ecfbc1b3076e22e85211e93c515229cc0f18640a8f35506d7d81a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:20:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:51.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:51 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Main process exited, code=exited, status=139/n/a
Sep 30 14:20:51 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Failed with result 'exit-code'.
Sep 30 14:20:51 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.474s CPU time.
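The kernel segfault in libntirpc.so.5.8 at 14:20:50 was caught by systemd-coredump (process 115705, ganesha.nfsd), after which podman reported the NFS container dead and the nfs.cephfs.2.0 unit failed with status=139 (SIGSEGV). For anyone triaging this on the same host, a small illustrative Python wrapper around coredumpctl (a standard systemd tool; the only assumption is that the crash can be matched by the executable's COMM name):

    # Illustrative helper: fetch systemd-coredump's record of the ganesha.nfsd
    # crash logged above. Needs coredumpctl on the host and journal read access.
    import subprocess

    def coredump_info(comm: str = "ganesha.nfsd") -> str:
        """Return `coredumpctl info` output for core dumps of `comm`."""
        result = subprocess.run(
            ["coredumpctl", "info", comm],   # matches by executable (COMM) name
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(coredump_info())

The same record can be opened interactively with coredumpctl debug ganesha.nfsd when gdb and the libntirpc debuginfo are installed.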
Sep 30 14:20:52 compute-0 ceph-mon[74194]: pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:20:52 compute-0 sudo[120900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flztbqggtvhylyaxhzsrvoetthserpzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242052.3664465-710-271986302817394/AnsiballZ_lineinfile.py'
Sep 30 14:20:52 compute-0 sudo[120900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:52 compute-0 python3.9[120902]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:20:52 compute-0 sudo[120900]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:20:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:53.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:53.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:54 compute-0 sudo[121054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfntqrcluyblfqccprndmdbcgtaodvtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242053.9069564-755-269429134142041/AnsiballZ_setup.py'
Sep 30 14:20:54 compute-0 sudo[121054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:54 compute-0 python3.9[121056]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:20:54 compute-0 ceph-mon[74194]: pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:20:54 compute-0 sudo[121054]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:54] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:20:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:20:54] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:20:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:20:55 compute-0 sudo[121138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miqsxcbncrmqinagsppwsjfqctawswvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242053.9069564-755-269429134142041/AnsiballZ_systemd.py'
Sep 30 14:20:55 compute-0 sudo[121138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:20:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:20:55 compute-0 python3.9[121141]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:20:55 compute-0 sudo[121138]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:55.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:55.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142055 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:20:56 compute-0 sshd-session[116665]: Connection closed by 192.168.122.30 port 34394
Sep 30 14:20:56 compute-0 sshd-session[116661]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:20:56 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Sep 30 14:20:56 compute-0 systemd[1]: session-42.scope: Consumed 23.152s CPU time.
Sep 30 14:20:56 compute-0 systemd-logind[808]: Session 42 logged out. Waiting for processes to exit.
Sep 30 14:20:56 compute-0 systemd-logind[808]: Removed session 42.
Sep 30 14:20:56 compute-0 ceph-mon[74194]: pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:20:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:20:56.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:20:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Sep 30 14:20:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:20:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:57.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:20:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000022s ======
Sep 30 14:20:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:57.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Sep 30 14:20:57 compute-0 sudo[121171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:20:57 compute-0 sudo[121171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:20:57 compute-0 sudo[121171]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:57 compute-0 sudo[121196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:20:57 compute-0 sudo[121196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:20:58 compute-0 podman[121293]: 2025-09-30 14:20:58.450227601 +0000 UTC m=+0.058070472 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:20:58 compute-0 sudo[121313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:20:58 compute-0 sudo[121313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:20:58 compute-0 sudo[121313]: pam_unix(sudo:session): session closed for user root
Sep 30 14:20:58 compute-0 podman[121293]: 2025-09-30 14:20:58.584516573 +0000 UTC m=+0.192359424 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:20:58 compute-0 ceph-mon[74194]: pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Sep 30 14:20:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:20:58.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:20:59 compute-0 podman[121437]: 2025-09-30 14:20:59.016307869 +0000 UTC m=+0.053332433 container exec 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:20:59 compute-0 podman[121437]: 2025-09-30 14:20:59.026579253 +0000 UTC m=+0.063603817 container exec_died 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:20:59
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['volumes', 'images', '.mgr', 'default.rgw.log', 'backups', '.nfs', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:20:59 compute-0 podman[121575]: 2025-09-30 14:20:59.51119783 +0000 UTC m=+0.056894405 container exec ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:20:59 compute-0 podman[121575]: 2025-09-30 14:20:59.521465643 +0000 UTC m=+0.067162198 container exec_died ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
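The pg_autoscaler lines above are internally consistent: each pool's raw pg target is its share of used capacity times its bias times a cluster-wide PG budget, and the result is then quantized (here every pool stays at its current PG count because the change is negligible). From the logged numbers the budget works out to exactly 300, which would match the default mon_target_pg_per_osd of 100 on a 3-OSD cluster; the OSD count is an assumption, as it is not shown in this excerpt. A quick check against two of the logged pools:

    # Reproduce the raw pg targets logged by pg_autoscaler above.
    # PG_BUDGET = 300 is inferred from the logged ratios; it is consistent with
    # the default mon_target_pg_per_osd (100) times an assumed 3 OSDs.
    PG_BUDGET = 300

    def raw_pg_target(space_ratio: float, bias: float) -> float:
        """Pre-quantization PG target: capacity share * bias * PG budget."""
        return space_ratio * bias * PG_BUDGET

    # Pool '.mgr' (bias 1.0): logged target 0.0021557249951162337
    print(raw_pg_target(7.185749983720779e-06, 1.0))

    # Pool 'cephfs.cephfs.meta' (bias 4.0): logged target 0.0006104707950771635
    print(raw_pg_target(5.087256625643029e-07, 4.0))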
Sep 30 14:20:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:20:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:20:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:20:59.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:20:59 compute-0 podman[121639]: 2025-09-30 14:20:59.727154489 +0000 UTC m=+0.047928210 container exec df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, name=keepalived, version=2.2.4, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2)
Sep 30 14:20:59 compute-0 podman[121639]: 2025-09-30 14:20:59.742507918 +0000 UTC m=+0.063281619 container exec_died df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, architecture=x86_64, com.redhat.component=keepalived-container, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-type=git, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Sep 30 14:20:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:20:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:20:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:20:59.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:20:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:20:59 compute-0 podman[121704]: 2025-09-30 14:20:59.935494794 +0000 UTC m=+0.044302968 container exec b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:20:59 compute-0 podman[121704]: 2025-09-30 14:20:59.95947756 +0000 UTC m=+0.068285714 container exec_died b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:21:00 compute-0 podman[121779]: 2025-09-30 14:21:00.163466637 +0000 UTC m=+0.047641924 container exec 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:21:00 compute-0 podman[121779]: 2025-09-30 14:21:00.364679931 +0000 UTC m=+0.248855198 container exec_died 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:21:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:00 compute-0 ceph-mon[74194]: pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Sep 30 14:21:00 compute-0 podman[121888]: 2025-09-30 14:21:00.716121401 +0000 UTC m=+0.058309247 container exec e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:21:00 compute-0 podman[121888]: 2025-09-30 14:21:00.748581109 +0000 UTC m=+0.090768975 container exec_died e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:21:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:21:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:21:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:21:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:21:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:21:00 compute-0 sudo[121196]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:21:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:21:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:00 compute-0 sudo[121931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:21:00 compute-0 sudo[121931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:00 compute-0 sudo[121931]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:21:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:21:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:21:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:21:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:21:00 compute-0 sudo[121956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:21:00 compute-0 sudo[121956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:21:01 compute-0 sudo[121956]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:21:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:21:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:21:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:21:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 400 B/s wr, 1 op/s
Sep 30 14:21:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:21:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:21:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:21:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:21:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:21:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:21:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:21:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:21:01 compute-0 sudo[122013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:21:01 compute-0 sudo[122013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:01 compute-0 sudo[122013]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:01 compute-0 sudo[122038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:21:01 compute-0 sudo[122038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:01 compute-0 sshd-session[122039]: Accepted publickey for zuul from 192.168.122.30 port 52060 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:21:01 compute-0 systemd-logind[808]: New session 43 of user zuul.
Sep 30 14:21:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:21:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:01.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:21:01 compute-0 systemd[1]: Started Session 43 of User zuul.
Sep 30 14:21:01 compute-0 sshd-session[122039]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:21:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:21:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:01.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:21:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:01 compute-0 ceph-mon[74194]: pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:21:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:21:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:21:01 compute-0 ceph-mon[74194]: pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 400 B/s wr, 1 op/s
Sep 30 14:21:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:21:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:21:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:21:02 compute-0 podman[122158]: 2025-09-30 14:21:02.02429425 +0000 UTC m=+0.046398516 container create 03ff511a01bfb2df2374633c770d9b5e3000ba858c1d4a597c7516cc3e3dfbb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galileo, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:21:02 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Scheduled restart job, restart counter is at 2.
Sep 30 14:21:02 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:21:02 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.474s CPU time.
Sep 30 14:21:02 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:21:02 compute-0 systemd[1]: Started libpod-conmon-03ff511a01bfb2df2374633c770d9b5e3000ba858c1d4a597c7516cc3e3dfbb6.scope.
Sep 30 14:21:02 compute-0 podman[122158]: 2025-09-30 14:21:02.000374776 +0000 UTC m=+0.022479072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:21:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:21:02 compute-0 podman[122158]: 2025-09-30 14:21:02.116758292 +0000 UTC m=+0.138862558 container init 03ff511a01bfb2df2374633c770d9b5e3000ba858c1d4a597c7516cc3e3dfbb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 14:21:02 compute-0 podman[122158]: 2025-09-30 14:21:02.134663349 +0000 UTC m=+0.156767615 container start 03ff511a01bfb2df2374633c770d9b5e3000ba858c1d4a597c7516cc3e3dfbb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galileo, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:21:02 compute-0 podman[122158]: 2025-09-30 14:21:02.138700441 +0000 UTC m=+0.160804707 container attach 03ff511a01bfb2df2374633c770d9b5e3000ba858c1d4a597c7516cc3e3dfbb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:21:02 compute-0 lucid_galileo[122200]: 167 167
Sep 30 14:21:02 compute-0 systemd[1]: libpod-03ff511a01bfb2df2374633c770d9b5e3000ba858c1d4a597c7516cc3e3dfbb6.scope: Deactivated successfully.
Sep 30 14:21:02 compute-0 podman[122158]: 2025-09-30 14:21:02.141448793 +0000 UTC m=+0.163553079 container died 03ff511a01bfb2df2374633c770d9b5e3000ba858c1d4a597c7516cc3e3dfbb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-47834728e5a84821ee5869f79d3974f182c038b043ff4c70033bac0be0ece2c7-merged.mount: Deactivated successfully.
Sep 30 14:21:02 compute-0 podman[122158]: 2025-09-30 14:21:02.190619851 +0000 UTC m=+0.212724117 container remove 03ff511a01bfb2df2374633c770d9b5e3000ba858c1d4a597c7516cc3e3dfbb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_galileo, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 14:21:02 compute-0 systemd[1]: libpod-conmon-03ff511a01bfb2df2374633c770d9b5e3000ba858c1d4a597c7516cc3e3dfbb6.scope: Deactivated successfully.
Sep 30 14:21:02 compute-0 podman[122309]: 2025-09-30 14:21:02.297643874 +0000 UTC m=+0.042282972 container create 207f187f99245e829d868e681a22d0de29f4d60a9007a67bb77f7bc1bf904397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:21:02 compute-0 sudo[122347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrtabgmueyokzjjnksrudlbhqufqdugr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242061.837808-26-15124257554206/AnsiballZ_file.py'
Sep 30 14:21:02 compute-0 sudo[122347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65dd6b9844ddde38a4f909cb21593aa43a3e4c4a6f9bc071952c875c299786c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65dd6b9844ddde38a4f909cb21593aa43a3e4c4a6f9bc071952c875c299786c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65dd6b9844ddde38a4f909cb21593aa43a3e4c4a6f9bc071952c875c299786c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65dd6b9844ddde38a4f909cb21593aa43a3e4c4a6f9bc071952c875c299786c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:02 compute-0 podman[122350]: 2025-09-30 14:21:02.351781344 +0000 UTC m=+0.048292038 container create 7ce93b2246ad85b3d0056fe7291d356ecf8499ca3646889550430036a6af7d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:21:02 compute-0 podman[122309]: 2025-09-30 14:21:02.364292249 +0000 UTC m=+0.108931377 container init 207f187f99245e829d868e681a22d0de29f4d60a9007a67bb77f7bc1bf904397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:21:02 compute-0 podman[122309]: 2025-09-30 14:21:02.371034992 +0000 UTC m=+0.115674080 container start 207f187f99245e829d868e681a22d0de29f4d60a9007a67bb77f7bc1bf904397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:21:02 compute-0 podman[122309]: 2025-09-30 14:21:02.276852071 +0000 UTC m=+0.021491189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:21:02 compute-0 bash[122309]: 207f187f99245e829d868e681a22d0de29f4d60a9007a67bb77f7bc1bf904397
Sep 30 14:21:02 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:21:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:02 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:21:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:02 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:21:02 compute-0 systemd[1]: Started libpod-conmon-7ce93b2246ad85b3d0056fe7291d356ecf8499ca3646889550430036a6af7d89.scope.
Sep 30 14:21:02 compute-0 podman[122350]: 2025-09-30 14:21:02.329252622 +0000 UTC m=+0.025763346 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:21:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:21:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:02 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:21:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:02 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:21:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:02 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081124f22e8fbd191495aa5d994481d7372865256f914004ef74b15b61f03129/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:02 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081124f22e8fbd191495aa5d994481d7372865256f914004ef74b15b61f03129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081124f22e8fbd191495aa5d994481d7372865256f914004ef74b15b61f03129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081124f22e8fbd191495aa5d994481d7372865256f914004ef74b15b61f03129/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081124f22e8fbd191495aa5d994481d7372865256f914004ef74b15b61f03129/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:02 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:21:02 compute-0 podman[122350]: 2025-09-30 14:21:02.452434593 +0000 UTC m=+0.148945307 container init 7ce93b2246ad85b3d0056fe7291d356ecf8499ca3646889550430036a6af7d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:21:02 compute-0 podman[122350]: 2025-09-30 14:21:02.461421607 +0000 UTC m=+0.157932301 container start 7ce93b2246ad85b3d0056fe7291d356ecf8499ca3646889550430036a6af7d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 14:21:02 compute-0 podman[122350]: 2025-09-30 14:21:02.46507794 +0000 UTC m=+0.161588634 container attach 7ce93b2246ad85b3d0056fe7291d356ecf8499ca3646889550430036a6af7d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:21:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:02 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:21:02 compute-0 python3.9[122358]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:02 compute-0 sudo[122347]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:02 compute-0 vigilant_davinci[122375]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:21:02 compute-0 vigilant_davinci[122375]: --> All data devices are unavailable
Sep 30 14:21:02 compute-0 systemd[1]: libpod-7ce93b2246ad85b3d0056fe7291d356ecf8499ca3646889550430036a6af7d89.scope: Deactivated successfully.
Sep 30 14:21:02 compute-0 podman[122350]: 2025-09-30 14:21:02.843715258 +0000 UTC m=+0.540225962 container died 7ce93b2246ad85b3d0056fe7291d356ecf8499ca3646889550430036a6af7d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-081124f22e8fbd191495aa5d994481d7372865256f914004ef74b15b61f03129-merged.mount: Deactivated successfully.
Sep 30 14:21:02 compute-0 podman[122350]: 2025-09-30 14:21:02.88427185 +0000 UTC m=+0.580782544 container remove 7ce93b2246ad85b3d0056fe7291d356ecf8499ca3646889550430036a6af7d89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:21:02 compute-0 systemd[1]: libpod-conmon-7ce93b2246ad85b3d0056fe7291d356ecf8499ca3646889550430036a6af7d89.scope: Deactivated successfully.
Sep 30 14:21:02 compute-0 sudo[122038]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:02 compute-0 sudo[122515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:21:02 compute-0 sudo[122515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:02 compute-0 sudo[122515]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:03 compute-0 sudo[122541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:21:03 compute-0 sudo[122541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:03 compute-0 sudo[122638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hquqlivdlyreergktiqodwwiwbfttrny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242062.722854-62-213222733010068/AnsiballZ_stat.py'
Sep 30 14:21:03 compute-0 sudo[122638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:03 compute-0 podman[122681]: 2025-09-30 14:21:03.409058489 +0000 UTC m=+0.039327805 container create 90a0e87351b452cbe2a1ec1d2b5fbfb6171c58a16484e0c9abcde1f15ea5816d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_robinson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:21:03 compute-0 python3.9[122640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:03 compute-0 systemd[1]: Started libpod-conmon-90a0e87351b452cbe2a1ec1d2b5fbfb6171c58a16484e0c9abcde1f15ea5816d.scope.
Sep 30 14:21:03 compute-0 sudo[122638]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 400 B/s wr, 1 op/s
Sep 30 14:21:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:21:03 compute-0 podman[122681]: 2025-09-30 14:21:03.481548517 +0000 UTC m=+0.111817853 container init 90a0e87351b452cbe2a1ec1d2b5fbfb6171c58a16484e0c9abcde1f15ea5816d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:21:03 compute-0 podman[122681]: 2025-09-30 14:21:03.391663293 +0000 UTC m=+0.021932639 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:21:03 compute-0 podman[122681]: 2025-09-30 14:21:03.488583897 +0000 UTC m=+0.118853213 container start 90a0e87351b452cbe2a1ec1d2b5fbfb6171c58a16484e0c9abcde1f15ea5816d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_robinson, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 14:21:03 compute-0 podman[122681]: 2025-09-30 14:21:03.492041745 +0000 UTC m=+0.122311081 container attach 90a0e87351b452cbe2a1ec1d2b5fbfb6171c58a16484e0c9abcde1f15ea5816d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_robinson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:21:03 compute-0 practical_robinson[122699]: 167 167
Sep 30 14:21:03 compute-0 systemd[1]: libpod-90a0e87351b452cbe2a1ec1d2b5fbfb6171c58a16484e0c9abcde1f15ea5816d.scope: Deactivated successfully.
Sep 30 14:21:03 compute-0 podman[122681]: 2025-09-30 14:21:03.494748827 +0000 UTC m=+0.125018143 container died 90a0e87351b452cbe2a1ec1d2b5fbfb6171c58a16484e0c9abcde1f15ea5816d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 14:21:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9d17a53fc9b8964749a387d9d51b580324e0447ea73ef829be5084385da5917-merged.mount: Deactivated successfully.
Sep 30 14:21:03 compute-0 podman[122681]: 2025-09-30 14:21:03.529471396 +0000 UTC m=+0.159740712 container remove 90a0e87351b452cbe2a1ec1d2b5fbfb6171c58a16484e0c9abcde1f15ea5816d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:21:03 compute-0 systemd[1]: libpod-conmon-90a0e87351b452cbe2a1ec1d2b5fbfb6171c58a16484e0c9abcde1f15ea5816d.scope: Deactivated successfully.
Sep 30 14:21:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:03.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:03 compute-0 sudo[122802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzgulbnlqecigymhqzyjgtzszdluald ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242062.722854-62-213222733010068/AnsiballZ_file.py'
Sep 30 14:21:03 compute-0 sudo[122802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:03 compute-0 podman[122785]: 2025-09-30 14:21:03.694043197 +0000 UTC m=+0.046638101 container create 07a8be128558db2f21e5f6b346970a7007049d4cb2eae9cb1a4ac6ec6bef4caf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_keller, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:21:03 compute-0 systemd[1]: Started libpod-conmon-07a8be128558db2f21e5f6b346970a7007049d4cb2eae9cb1a4ac6ec6bef4caf.scope.
Sep 30 14:21:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:03.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:03 compute-0 podman[122785]: 2025-09-30 14:21:03.676088799 +0000 UTC m=+0.028683733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:21:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb85609e506447a81f9554c8acf812c37d8953b697224cf29f0f3032620eda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb85609e506447a81f9554c8acf812c37d8953b697224cf29f0f3032620eda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb85609e506447a81f9554c8acf812c37d8953b697224cf29f0f3032620eda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebb85609e506447a81f9554c8acf812c37d8953b697224cf29f0f3032620eda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:03 compute-0 podman[122785]: 2025-09-30 14:21:03.789709782 +0000 UTC m=+0.142304706 container init 07a8be128558db2f21e5f6b346970a7007049d4cb2eae9cb1a4ac6ec6bef4caf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:21:03 compute-0 podman[122785]: 2025-09-30 14:21:03.79708834 +0000 UTC m=+0.149683244 container start 07a8be128558db2f21e5f6b346970a7007049d4cb2eae9cb1a4ac6ec6bef4caf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_keller, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:21:03 compute-0 podman[122785]: 2025-09-30 14:21:03.800945598 +0000 UTC m=+0.153540522 container attach 07a8be128558db2f21e5f6b346970a7007049d4cb2eae9cb1a4ac6ec6bef4caf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:21:03 compute-0 python3.9[122808]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:03 compute-0 sudo[122802]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:04 compute-0 modest_keller[122815]: {
Sep 30 14:21:04 compute-0 modest_keller[122815]:     "0": [
Sep 30 14:21:04 compute-0 modest_keller[122815]:         {
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "devices": [
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "/dev/loop3"
Sep 30 14:21:04 compute-0 modest_keller[122815]:             ],
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "lv_name": "ceph_lv0",
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "lv_size": "21470642176",
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "name": "ceph_lv0",
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "tags": {
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.cluster_name": "ceph",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.crush_device_class": "",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.encrypted": "0",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.osd_id": "0",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.type": "block",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.vdo": "0",
Sep 30 14:21:04 compute-0 modest_keller[122815]:                 "ceph.with_tpm": "0"
Sep 30 14:21:04 compute-0 modest_keller[122815]:             },
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "type": "block",
Sep 30 14:21:04 compute-0 modest_keller[122815]:             "vg_name": "ceph_vg0"
Sep 30 14:21:04 compute-0 modest_keller[122815]:         }
Sep 30 14:21:04 compute-0 modest_keller[122815]:     ]
Sep 30 14:21:04 compute-0 modest_keller[122815]: }
Sep 30 14:21:04 compute-0 systemd[1]: libpod-07a8be128558db2f21e5f6b346970a7007049d4cb2eae9cb1a4ac6ec6bef4caf.scope: Deactivated successfully.
Sep 30 14:21:04 compute-0 podman[122785]: 2025-09-30 14:21:04.093866197 +0000 UTC m=+0.446461121 container died 07a8be128558db2f21e5f6b346970a7007049d4cb2eae9cb1a4ac6ec6bef4caf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_keller, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cebb85609e506447a81f9554c8acf812c37d8953b697224cf29f0f3032620eda-merged.mount: Deactivated successfully.
Sep 30 14:21:04 compute-0 podman[122785]: 2025-09-30 14:21:04.132155907 +0000 UTC m=+0.484750811 container remove 07a8be128558db2f21e5f6b346970a7007049d4cb2eae9cb1a4ac6ec6bef4caf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:21:04 compute-0 systemd[1]: libpod-conmon-07a8be128558db2f21e5f6b346970a7007049d4cb2eae9cb1a4ac6ec6bef4caf.scope: Deactivated successfully.
Sep 30 14:21:04 compute-0 sudo[122541]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:04 compute-0 sudo[122859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:21:04 compute-0 sudo[122859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:04 compute-0 sudo[122859]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:04 compute-0 sshd-session[122066]: Connection closed by 192.168.122.30 port 52060
Sep 30 14:21:04 compute-0 sshd-session[122039]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:21:04 compute-0 sudo[122884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:21:04 compute-0 sudo[122884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:04 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Sep 30 14:21:04 compute-0 systemd[1]: session-43.scope: Consumed 1.657s CPU time.
Sep 30 14:21:04 compute-0 systemd-logind[808]: Session 43 logged out. Waiting for processes to exit.
Sep 30 14:21:04 compute-0 systemd-logind[808]: Removed session 43.
Sep 30 14:21:04 compute-0 ceph-mon[74194]: pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 400 B/s wr, 1 op/s
Sep 30 14:21:04 compute-0 podman[122947]: 2025-09-30 14:21:04.653249333 +0000 UTC m=+0.043839427 container create 7fe4062e029de2b5152fc251d5971149287317f2c4d2ef25c8854b5295a55faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:21:04 compute-0 systemd[1]: Started libpod-conmon-7fe4062e029de2b5152fc251d5971149287317f2c4d2ef25c8854b5295a55faf.scope.
Sep 30 14:21:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:21:04 compute-0 podman[122947]: 2025-09-30 14:21:04.631701324 +0000 UTC m=+0.022291408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:21:04 compute-0 podman[122947]: 2025-09-30 14:21:04.732609708 +0000 UTC m=+0.123199772 container init 7fe4062e029de2b5152fc251d5971149287317f2c4d2ef25c8854b5295a55faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:21:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:04] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:21:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:04] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:21:04 compute-0 podman[122947]: 2025-09-30 14:21:04.741857388 +0000 UTC m=+0.132447452 container start 7fe4062e029de2b5152fc251d5971149287317f2c4d2ef25c8854b5295a55faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True)
Sep 30 14:21:04 compute-0 podman[122947]: 2025-09-30 14:21:04.745583292 +0000 UTC m=+0.136173376 container attach 7fe4062e029de2b5152fc251d5971149287317f2c4d2ef25c8854b5295a55faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_noyce, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:21:04 compute-0 compassionate_noyce[122964]: 167 167
Sep 30 14:21:04 compute-0 systemd[1]: libpod-7fe4062e029de2b5152fc251d5971149287317f2c4d2ef25c8854b5295a55faf.scope: Deactivated successfully.
Sep 30 14:21:04 compute-0 podman[122947]: 2025-09-30 14:21:04.747874085 +0000 UTC m=+0.138464159 container died 7fe4062e029de2b5152fc251d5971149287317f2c4d2ef25c8854b5295a55faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_noyce, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e83a541045ec82296f1d5d20ced7d8d3a1d5292ad22ad9eecc0190cfd1403993-merged.mount: Deactivated successfully.
Sep 30 14:21:04 compute-0 podman[122947]: 2025-09-30 14:21:04.791491616 +0000 UTC m=+0.182081680 container remove 7fe4062e029de2b5152fc251d5971149287317f2c4d2ef25c8854b5295a55faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:21:04 compute-0 systemd[1]: libpod-conmon-7fe4062e029de2b5152fc251d5971149287317f2c4d2ef25c8854b5295a55faf.scope: Deactivated successfully.
Sep 30 14:21:04 compute-0 podman[122988]: 2025-09-30 14:21:04.966375122 +0000 UTC m=+0.051605044 container create 1ca68eae28166fa1f22b36672f2b9a6e225a59e009be43f0ed747fecb8d740e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:21:05 compute-0 systemd[1]: Started libpod-conmon-1ca68eae28166fa1f22b36672f2b9a6e225a59e009be43f0ed747fecb8d740e0.scope.
Sep 30 14:21:05 compute-0 podman[122988]: 2025-09-30 14:21:04.947119084 +0000 UTC m=+0.032349046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:21:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95681a3729c1c825ccb0b5fe5561b90743490b418c06b9dc67c71ccb6caa2a83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95681a3729c1c825ccb0b5fe5561b90743490b418c06b9dc67c71ccb6caa2a83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95681a3729c1c825ccb0b5fe5561b90743490b418c06b9dc67c71ccb6caa2a83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95681a3729c1c825ccb0b5fe5561b90743490b418c06b9dc67c71ccb6caa2a83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:05 compute-0 podman[122988]: 2025-09-30 14:21:05.060204015 +0000 UTC m=+0.145434007 container init 1ca68eae28166fa1f22b36672f2b9a6e225a59e009be43f0ed747fecb8d740e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_engelbart, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 14:21:05 compute-0 podman[122988]: 2025-09-30 14:21:05.066115709 +0000 UTC m=+0.151345641 container start 1ca68eae28166fa1f22b36672f2b9a6e225a59e009be43f0ed747fecb8d740e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_engelbart, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:21:05 compute-0 podman[122988]: 2025-09-30 14:21:05.069123418 +0000 UTC m=+0.154353370 container attach 1ca68eae28166fa1f22b36672f2b9a6e225a59e009be43f0ed747fecb8d740e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:21:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 400 B/s wr, 1 op/s
Sep 30 14:21:05 compute-0 lvm[123079]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:21:05 compute-0 lvm[123079]: VG ceph_vg0 finished
Sep 30 14:21:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:05.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:05 compute-0 unruffled_engelbart[123004]: {}
Sep 30 14:21:05 compute-0 systemd[1]: libpod-1ca68eae28166fa1f22b36672f2b9a6e225a59e009be43f0ed747fecb8d740e0.scope: Deactivated successfully.
Sep 30 14:21:05 compute-0 systemd[1]: libpod-1ca68eae28166fa1f22b36672f2b9a6e225a59e009be43f0ed747fecb8d740e0.scope: Consumed 1.036s CPU time.
Sep 30 14:21:05 compute-0 podman[122988]: 2025-09-30 14:21:05.77039958 +0000 UTC m=+0.855629512 container died 1ca68eae28166fa1f22b36672f2b9a6e225a59e009be43f0ed747fecb8d740e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_engelbart, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:21:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:05.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-95681a3729c1c825ccb0b5fe5561b90743490b418c06b9dc67c71ccb6caa2a83-merged.mount: Deactivated successfully.
Sep 30 14:21:05 compute-0 podman[122988]: 2025-09-30 14:21:05.813695114 +0000 UTC m=+0.898925046 container remove 1ca68eae28166fa1f22b36672f2b9a6e225a59e009be43f0ed747fecb8d740e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_engelbart, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 14:21:05 compute-0 systemd[1]: libpod-conmon-1ca68eae28166fa1f22b36672f2b9a6e225a59e009be43f0ed747fecb8d740e0.scope: Deactivated successfully.
Sep 30 14:21:05 compute-0 sudo[122884]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:21:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:21:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:05 compute-0 sudo[123095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:21:05 compute-0 sudo[123095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:05 compute-0 sudo[123095]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:06 compute-0 ceph-mon[74194]: pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 400 B/s wr, 1 op/s
Sep 30 14:21:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:21:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:06.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:21:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:06.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:21:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:06.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:21:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 700 B/s wr, 3 op/s
Sep 30 14:21:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:07.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:07.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:08 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:21:08 compute-0 ceph-mon[74194]: pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 700 B/s wr, 3 op/s
Sep 30 14:21:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:08.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:21:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 700 B/s wr, 3 op/s
Sep 30 14:21:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:09.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:09 compute-0 sshd-session[123123]: Accepted publickey for zuul from 192.168.122.30 port 34682 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:21:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:09.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:09 compute-0 systemd-logind[808]: New session 44 of user zuul.
Sep 30 14:21:09 compute-0 systemd[1]: Started Session 44 of User zuul.
Sep 30 14:21:09 compute-0 sshd-session[123123]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:21:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:10 compute-0 ceph-mon[74194]: pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 700 B/s wr, 3 op/s
Sep 30 14:21:10 compute-0 python3.9[123277]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:21:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142111 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:21:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Sep 30 14:21:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:11.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000022s ======
Sep 30 14:21:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:11.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Sep 30 14:21:11 compute-0 sudo[123433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmdyvmzxzbvpskrrmqhbnixlhhsbphal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242071.3768926-59-12223730434557/AnsiballZ_file.py'
Sep 30 14:21:11 compute-0 sudo[123433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:12 compute-0 python3.9[123435]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:12 compute-0 sudo[123433]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:12 compute-0 ceph-mon[74194]: pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Sep 30 14:21:12 compute-0 sudo[123608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeeeszaiogsilrwornyffqctlnygdmpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242072.43272-83-244766726724505/AnsiballZ_stat.py'
Sep 30 14:21:12 compute-0 sudo[123608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:13 compute-0 python3.9[123610]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:13 compute-0 sudo[123608]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:13 compute-0 sudo[123687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omkdraeyizletsbzsshigprtitckxgoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242072.43272-83-244766726724505/AnsiballZ_file.py'
Sep 30 14:21:13 compute-0 sudo[123687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Sep 30 14:21:13 compute-0 python3.9[123689]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.acekg69g recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:13 compute-0 sudo[123687]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:13.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:13.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:14 compute-0 sudo[123840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjhlxfxhrkdjnwlsmaewxiznkmjrunie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242074.0512137-143-32570755264892/AnsiballZ_stat.py'
Sep 30 14:21:14 compute-0 sudo[123840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:14 compute-0 python3.9[123842]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:14 compute-0 sudo[123840]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000007:nfs.cephfs.2: -2
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:21:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:21:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:14 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:21:14 compute-0 ceph-mon[74194]: pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Sep 30 14:21:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:21:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:14] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Sep 30 14:21:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:14] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Sep 30 14:21:14 compute-0 sudo[123931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldgfyicevlbluxsoauqhjzizjwhmnrks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242074.0512137-143-32570755264892/AnsiballZ_file.py'
Sep 30 14:21:14 compute-0 sudo[123931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:14 compute-0 python3.9[123933]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.h78d5oq6 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:14 compute-0 sudo[123931]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:15 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3fa0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Sep 30 14:21:15 compute-0 sudo[124086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvehwifphdyoxjscflmylvafstsrdkqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242075.3309515-182-241770008146645/AnsiballZ_file.py'
Sep 30 14:21:15 compute-0 sudo[124086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:15.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:15.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:15 compute-0 python3.9[124088]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:21:15 compute-0 sudo[124086]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:16 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f94001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:16 compute-0 sudo[124239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hobzpyxjpeckwakpmovuwzhluuzxqkay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242076.0444856-206-267158343165992/AnsiballZ_stat.py'
Sep 30 14:21:16 compute-0 sudo[124239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:16 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f7c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:16 compute-0 python3.9[124241]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:16 compute-0 sudo[124239]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:16 compute-0 ceph-mon[74194]: pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Sep 30 14:21:16 compute-0 sudo[124317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xylmcsclcgljxummgzqvlcyschynlldc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242076.0444856-206-267158343165992/AnsiballZ_file.py'
Sep 30 14:21:16 compute-0 sudo[124317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:16 compute-0 python3.9[124319]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:21:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:16.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:21:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:16.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:21:16 compute-0 sudo[124317]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:17 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f74000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Sep 30 14:21:17 compute-0 sudo[124470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfodozihrbxljdcpyycyvvwwcejnsrvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242077.3392355-206-236501315879895/AnsiballZ_stat.py'
Sep 30 14:21:17 compute-0 sudo[124470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:17.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:17 compute-0 python3.9[124472]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:21:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:17.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:21:17 compute-0 sudo[124470]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142118 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:21:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:18 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:18 compute-0 sudo[124551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqzuzekbodzabsjchwwfzfqtvuvphzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242077.3392355-206-236501315879895/AnsiballZ_file.py'
Sep 30 14:21:18 compute-0 sudo[124551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:18 compute-0 python3.9[124553]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:21:18 compute-0 sudo[124551]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:18 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f94002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:18 compute-0 sudo[124630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:21:18 compute-0 sudo[124630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:18 compute-0 sudo[124630]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:18 compute-0 sudo[124728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucexeljtbsasvgvuxnksjfpbalyehxdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242078.4535856-275-9518868077199/AnsiballZ_file.py'
Sep 30 14:21:18 compute-0 sudo[124728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:18 compute-0 ceph-mon[74194]: pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Sep 30 14:21:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:18.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:21:18 compute-0 sshd-session[124474]: Invalid user ntuser from 210.90.155.80 port 59578
Sep 30 14:21:18 compute-0 python3.9[124730]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:18 compute-0 sudo[124728]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:19 compute-0 sshd-session[124474]: Received disconnect from 210.90.155.80 port 59578:11: Bye Bye [preauth]
Sep 30 14:21:19 compute-0 sshd-session[124474]: Disconnected from invalid user ntuser 210.90.155.80 port 59578 [preauth]
Sep 30 14:21:19 compute-0 sudo[124881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghmgrjlfjcblkycqsetqgzgoomrbrlar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242079.183847-299-165408904382283/AnsiballZ_stat.py'
Sep 30 14:21:19 compute-0 sudo[124881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:19 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:21:19 compute-0 python3.9[124883]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:19.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:19 compute-0 sudo[124881]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000045s ======
Sep 30 14:21:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:19.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000045s
Sep 30 14:21:19 compute-0 sudo[124960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxriqnrlgzqadxbkuouvuozvtzvdptlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242079.183847-299-165408904382283/AnsiballZ_file.py'
Sep 30 14:21:19 compute-0 sudo[124960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:20 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:20 compute-0 python3.9[124962]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:20 compute-0 sudo[124960]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:20 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:20 compute-0 sudo[125112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inectbqdyqmjosgtuzrscevdrmpzlsll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242080.3260741-335-2817683226287/AnsiballZ_stat.py'
Sep 30 14:21:20 compute-0 sudo[125112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:20 compute-0 ceph-mon[74194]: pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:21:20 compute-0 python3.9[125114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:20 compute-0 sudo[125112]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:21 compute-0 sudo[125190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vistflrvmjqafzigrhdzynuvgvhkmcgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242080.3260741-335-2817683226287/AnsiballZ_file.py'
Sep 30 14:21:21 compute-0 sudo[125190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:21 compute-0 python3.9[125192]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:21 compute-0 sudo[125190]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:21 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f94002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:21:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:21.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:21.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:22 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:22 compute-0 sudo[125344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmptnkicgagkhvacrqvepbcsqxnvcxzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242081.4740767-371-223164476662139/AnsiballZ_systemd.py'
Sep 30 14:21:22 compute-0 sudo[125344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:22 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:22 compute-0 python3.9[125346]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:21:22 compute-0 systemd[1]: Reloading.
Sep 30 14:21:22 compute-0 systemd-rc-local-generator[125371]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:21:22 compute-0 systemd-sysv-generator[125375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:21:22 compute-0 ceph-mon[74194]: pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:21:22 compute-0 sudo[125344]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:23 compute-0 sudo[125534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypwrajtirsiyopmyjxmsdgvwwmmecuhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242083.0214138-395-188879254539003/AnsiballZ_stat.py'
Sep 30 14:21:23 compute-0 sudo[125534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:23 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80002400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Sep 30 14:21:23 compute-0 python3.9[125536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:23 compute-0 sudo[125534]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:23.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:23 compute-0 sudo[125613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cppafsfkbemudirskluxuiiimaehkcut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242083.0214138-395-188879254539003/AnsiballZ_file.py'
Sep 30 14:21:23 compute-0 sudo[125613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:23.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:23 compute-0 python3.9[125615]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:23 compute-0 sudo[125613]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:24 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80002400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:24 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f7c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:24 compute-0 sudo[125765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymdnsftcfzqvsqawefdmlriqufxymtgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242084.20364-431-71574731101213/AnsiballZ_stat.py'
Sep 30 14:21:24 compute-0 sudo[125765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:24 compute-0 python3.9[125767]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:24 compute-0 sudo[125765]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:24] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Sep 30 14:21:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:24] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Sep 30 14:21:24 compute-0 ceph-mon[74194]: pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Sep 30 14:21:24 compute-0 sudo[125843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mssojjwttaoeojnodowqqvnirmykkdqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242084.20364-431-71574731101213/AnsiballZ_file.py'
Sep 30 14:21:24 compute-0 sudo[125843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:25 compute-0 python3.9[125845]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:25 compute-0 sudo[125843]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:25 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Sep 30 14:21:25 compute-0 sudo[125996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-webeklmzsrfzqtiwnzfsrihjcpervzkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242085.350567-467-136300139234284/AnsiballZ_systemd.py'
Sep 30 14:21:25 compute-0 sudo[125996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:25.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:25.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:25 compute-0 python3.9[125998]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:21:25 compute-0 systemd[1]: Reloading.
Sep 30 14:21:26 compute-0 systemd-rc-local-generator[126025]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:21:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:26 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80002400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:26 compute-0 systemd-sysv-generator[126028]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:21:26 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 14:21:26 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 14:21:26 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 14:21:26 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 14:21:26 compute-0 sudo[125996]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:26 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f94002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:26 compute-0 ceph-mon[74194]: pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Sep 30 14:21:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:26.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:21:27 compute-0 python3.9[126189]: ansible-ansible.builtin.service_facts Invoked
Sep 30 14:21:27 compute-0 network[126206]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 14:21:27 compute-0 network[126207]: 'network-scripts' will be removed from distribution in near future.
Sep 30 14:21:27 compute-0 network[126208]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 14:21:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:27 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f94002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Sep 30 14:21:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:27.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:27 compute-0 ceph-mon[74194]: pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Sep 30 14:21:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:27.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:28 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f7c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:28 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:28.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:21:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:29 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f74002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:21:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:21:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:21:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:21:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:21:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:29.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:21:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:21:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:21:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:21:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:29.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:30 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f94002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:30 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f74002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:30 compute-0 ceph-mon[74194]: pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:21:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:21:30 compute-0 sudo[126475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdirukfjckoalvxkarcgerfqkrokfeiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242090.6088676-545-124090130410516/AnsiballZ_stat.py'
Sep 30 14:21:30 compute-0 sudo[126475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:31 compute-0 python3.9[126477]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:31 compute-0 sudo[126475]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:31 compute-0 sudo[126554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwddnedrkwkilhspfgebylnwqnbpiwfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242090.6088676-545-124090130410516/AnsiballZ_file.py'
Sep 30 14:21:31 compute-0 sudo[126554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:31 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f7c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:21:31 compute-0 python3.9[126556]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:31 compute-0 sudo[126554]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000023s ======
Sep 30 14:21:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:31.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Sep 30 14:21:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:31.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:32 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:32 compute-0 sudo[126707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzawwhyqtcsttishydotgmzhbojcrmee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242091.9157202-584-64293435530114/AnsiballZ_file.py'
Sep 30 14:21:32 compute-0 sudo[126707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:32 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f94002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:32 compute-0 python3.9[126709]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:32 compute-0 sudo[126707]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:32 compute-0 ceph-mon[74194]: pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:21:32 compute-0 sudo[126859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmpovvyjjpeqvtynhcjuwdpjhvedtrho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242092.6105077-608-150698801330270/AnsiballZ_stat.py'
Sep 30 14:21:32 compute-0 sudo[126859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:33 compute-0 python3.9[126861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:33 compute-0 sudo[126859]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:33 compute-0 sudo[126938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klosfdsuxmckdtewmoxsuqbfsvjtywon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242092.6105077-608-150698801330270/AnsiballZ_file.py'
Sep 30 14:21:33 compute-0 sudo[126938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:33 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f74002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:21:33 compute-0 python3.9[126940]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:33 compute-0 sudo[126938]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000022s ======
Sep 30 14:21:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:33.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Sep 30 14:21:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000022s ======
Sep 30 14:21:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:33.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Sep 30 14:21:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:34 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f7c0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:34 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:34 compute-0 sudo[127091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoiiztgdqkzcqghzqmhggpvnrayublev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242094.0717142-653-14014930678126/AnsiballZ_timezone.py'
Sep 30 14:21:34 compute-0 sudo[127091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:34 compute-0 ceph-mon[74194]: pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:21:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:34] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Sep 30 14:21:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:34] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Sep 30 14:21:34 compute-0 python3.9[127093]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Sep 30 14:21:34 compute-0 systemd[1]: Starting Time & Date Service...
Sep 30 14:21:34 compute-0 systemd[1]: Started Time & Date Service.
Sep 30 14:21:34 compute-0 sudo[127091]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:35 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f94002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:21:35 compute-0 sudo[127248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpdhgpxgywggiikpshioerdhhjieicxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242095.3351204-680-124578324635644/AnsiballZ_file.py'
Sep 30 14:21:35 compute-0 sudo[127248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000045s ======
Sep 30 14:21:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:35.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000045s
Sep 30 14:21:35 compute-0 python3.9[127250]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:35 compute-0 sudo[127248]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:35.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:36 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f74003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:36 compute-0 sudo[127401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlsbhfkgutuqbabjsloskywzazfajhee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242096.033705-704-148250533509278/AnsiballZ_stat.py'
Sep 30 14:21:36 compute-0 sudo[127401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:36 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f7c0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:36 compute-0 python3.9[127403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:36 compute-0 sudo[127401]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:36 compute-0 ceph-mon[74194]: pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:21:36 compute-0 sudo[127479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slcrajseyjdeuxvgytfxytljtllepfrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242096.033705-704-148250533509278/AnsiballZ_file.py'
Sep 30 14:21:36 compute-0 sudo[127479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142136 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:21:36 compute-0 python3.9[127481]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:36 compute-0 sudo[127479]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:36.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:21:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:36.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:21:37 compute-0 sudo[127632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlekpjqecbwviibpxqkjciioiqzdiqtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242097.1964839-740-115429143826011/AnsiballZ_stat.py'
Sep 30 14:21:37 compute-0 sudo[127632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:37 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:21:37 compute-0 python3.9[127634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:37 compute-0 sudo[127632]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:37.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:37 compute-0 sudo[127711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xitzlgclableosjnxathbznkfufujycb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242097.1964839-740-115429143826011/AnsiballZ_file.py'
Sep 30 14:21:37 compute-0 sudo[127711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:38 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f94002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:38 compute-0 python3.9[127713]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.mzmwyrbi recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:38 compute-0 sudo[127711]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:38 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f74003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:38 compute-0 sudo[127863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knzjpnyojspxgbhuneqhysdcptwbmizv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242098.3235931-776-3473056036271/AnsiballZ_stat.py'
Sep 30 14:21:38 compute-0 sudo[127863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:38 compute-0 sudo[127866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:21:38 compute-0 sudo[127866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:38 compute-0 sudo[127866]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:38 compute-0 ceph-mon[74194]: pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:21:38 compute-0 python3.9[127865]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:38 compute-0 sudo[127863]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:38.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:21:39 compute-0 sudo[127966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rulksfcylfdecuimspwxeksmowjjwcxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242098.3235931-776-3473056036271/AnsiballZ_file.py'
Sep 30 14:21:39 compute-0 sudo[127966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:39 compute-0 python3.9[127968]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:39 compute-0 sudo[127966]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:39 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f7c0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:21:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:21:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:39.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:21:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:39.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:21:39 compute-0 sudo[128120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltkzieeiaijzcsydwudpjcerbmhzjane ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242099.5708203-815-93767618394345/AnsiballZ_command.py'
Sep 30 14:21:39 compute-0 sudo[128120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:40 compute-0 kernel: ganesha.nfsd[123874]: segfault at 50 ip 00007f404e48a32e sp 00007f40137fd210 error 4 in libntirpc.so.5.8[7f404e46f000+2c000] likely on CPU 0 (core 0, socket 0)
Sep 30 14:21:40 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 14:21:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[122366]: 30/09/2025 14:21:40 : epoch 68dbe74e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3f80004050 fd 39 proxy ignored for local
Sep 30 14:21:40 compute-0 systemd[1]: Started Process Core Dump (PID 128123/UID 0).
Sep 30 14:21:40 compute-0 python3.9[128122]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:21:40 compute-0 sudo[128120]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:40 compute-0 ceph-mon[74194]: pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:21:40 compute-0 sudo[128275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dofxxmvdeozqcelrpmzfovuzqiahnkhn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759242100.3936796-839-181159833456264/AnsiballZ_edpm_nftables_from_files.py'
Sep 30 14:21:40 compute-0 sudo[128275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:41 compute-0 python3[128277]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Sep 30 14:21:41 compute-0 sudo[128275]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:41 compute-0 systemd-coredump[128124]: Process 122372 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 46:
                                                    #0  0x00007f404e48a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 14:21:41 compute-0 systemd[1]: systemd-coredump@2-128123-0.service: Deactivated successfully.
Sep 30 14:21:41 compute-0 systemd[1]: systemd-coredump@2-128123-0.service: Consumed 1.156s CPU time.
Sep 30 14:21:41 compute-0 podman[128358]: 2025-09-30 14:21:41.330282561 +0000 UTC m=+0.026983071 container died 207f187f99245e829d868e681a22d0de29f4d60a9007a67bb77f7bc1bf904397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 14:21:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:21:41 compute-0 sudo[128447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umqpvrbnzlmbfaonmynexhagnyvbimxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242101.244132-863-112330329498469/AnsiballZ_stat.py'
Sep 30 14:21:41 compute-0 sudo[128447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a65dd6b9844ddde38a4f909cb21593aa43a3e4c4a6f9bc071952c875c299786c-merged.mount: Deactivated successfully.
Sep 30 14:21:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:41.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:21:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:41.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:21:41 compute-0 python3.9[128449]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:41 compute-0 sudo[128447]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:42 compute-0 sudo[128526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opkhrvqvynscrdmmixovyqawdawbsarb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242101.244132-863-112330329498469/AnsiballZ_file.py'
Sep 30 14:21:42 compute-0 sudo[128526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:42 compute-0 ceph-mon[74194]: pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:21:42 compute-0 podman[128358]: 2025-09-30 14:21:42.203811523 +0000 UTC m=+0.900512043 container remove 207f187f99245e829d868e681a22d0de29f4d60a9007a67bb77f7bc1bf904397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 14:21:42 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Main process exited, code=exited, status=139/n/a
Sep 30 14:21:42 compute-0 python3.9[128528]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:42 compute-0 sudo[128526]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:42 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Failed with result 'exit-code'.
Sep 30 14:21:42 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.430s CPU time.
Sep 30 14:21:42 compute-0 sudo[128708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybynndanhbjyoxmbbqmwacifryujwwav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242102.4851742-899-188762915977140/AnsiballZ_stat.py'
Sep 30 14:21:42 compute-0 sudo[128708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:43 compute-0 python3.9[128710]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:43 compute-0 sudo[128708]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:43 compute-0 sudo[128786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjfklxxdhaefpcamvpxpznifpkdestyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242102.4851742-899-188762915977140/AnsiballZ_file.py'
Sep 30 14:21:43 compute-0 sudo[128786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:21:43 compute-0 python3.9[128789]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:43 compute-0 sudo[128786]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:43.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:21:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:21:44 compute-0 sudo[128940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcdbvpogjnesqirczpocgdcndzqcdqfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242103.8194423-935-223137477999463/AnsiballZ_stat.py'
Sep 30 14:21:44 compute-0 sudo[128940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:44 compute-0 python3.9[128942]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:44 compute-0 sudo[128940]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:44 compute-0 sudo[129018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qripetqjkmckdivriyqeoyjhmlgnwkyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242103.8194423-935-223137477999463/AnsiballZ_file.py'
Sep 30 14:21:44 compute-0 sudo[129018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:21:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:21:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:44] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:21:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:44] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:21:44 compute-0 python3.9[129020]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:44 compute-0 sudo[129018]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:44 compute-0 ceph-mon[74194]: pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:21:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:21:45 compute-0 sudo[129171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxaokhgfzmirpciqwlziluxijosrxsvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242105.0338802-971-147812107103180/AnsiballZ_stat.py'
Sep 30 14:21:45 compute-0 sudo[129171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142145 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:21:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:21:45 compute-0 python3.9[129173]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:45 compute-0 sudo[129171]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:45.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:45 compute-0 sudo[129250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqhkxqiugsbcbnrdgwwraaqbouxtnetx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242105.0338802-971-147812107103180/AnsiballZ_file.py'
Sep 30 14:21:45 compute-0 sudo[129250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:21:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:45.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:21:46 compute-0 python3.9[129252]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142146 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:21:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [NOTICE] 272/142146 (4) : haproxy version is 2.3.17-d1c9119
Sep 30 14:21:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [NOTICE] 272/142146 (4) : path to executable is /usr/local/sbin/haproxy
Sep 30 14:21:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [ALERT] 272/142146 (4) : backend 'backend' has no server available!
Sep 30 14:21:46 compute-0 sudo[129250]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:46 compute-0 ceph-mon[74194]: pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:21:46 compute-0 sudo[129402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwutjajjdzktijapiqgozjarvieypzwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242106.240887-1007-52857331590007/AnsiballZ_stat.py'
Sep 30 14:21:46 compute-0 sudo[129402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:46 compute-0 python3.9[129404]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:21:46 compute-0 sudo[129402]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:46.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:21:47 compute-0 sudo[129480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ernhaachdjqnqfahgmasaidhpuvnhswr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242106.240887-1007-52857331590007/AnsiballZ_file.py'
Sep 30 14:21:47 compute-0 sudo[129480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:47 compute-0 python3.9[129482]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:47 compute-0 sudo[129480]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:21:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:47.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:47 compute-0 sudo[129634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmjarthlluhitmcvopjdbwiyaejfbdaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242107.5722651-1046-8087394844056/AnsiballZ_command.py'
Sep 30 14:21:47 compute-0 sudo[129634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:21:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:21:47 compute-0 python3.9[129636]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:21:48 compute-0 sudo[129634]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:48 compute-0 ceph-mon[74194]: pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:21:48 compute-0 sudo[129789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfudjzyvfrcpwsvndcejkojnpmuvlwxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242108.2310724-1070-225478011271376/AnsiballZ_blockinfile.py'
Sep 30 14:21:48 compute-0 sudo[129789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:48.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:21:48 compute-0 python3.9[129791]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:48 compute-0 sudo[129789]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:48 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:21:48 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:21:49 compute-0 sudo[129943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccuhwkznmdynbunzwmoezqcuqqoedbpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242109.1097238-1097-97094699728431/AnsiballZ_file.py'
Sep 30 14:21:49 compute-0 sudo[129943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:21:49 compute-0 python3.9[129945]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:49 compute-0 sudo[129943]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:49.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:49.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:49 compute-0 sudo[130096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhlxwgavemhgoghamnpvcipusbxvqhth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242109.716842-1097-108754868031621/AnsiballZ_file.py'
Sep 30 14:21:49 compute-0 sudo[130096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:50 compute-0 python3.9[130098]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:21:50 compute-0 sudo[130096]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:50 compute-0 ceph-mon[74194]: pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:21:50 compute-0 sudo[130248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jilekujcajneeukndkfnbyvonwesrpqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242110.3636413-1142-149777360732532/AnsiballZ_mount.py'
Sep 30 14:21:50 compute-0 sudo[130248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:50 compute-0 python3.9[130250]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Sep 30 14:21:50 compute-0 sudo[130248]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:51 compute-0 sudo[130401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odsygcewdydebrupvzlavbojwrugwhyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242111.1324-1142-147928044123080/AnsiballZ_mount.py'
Sep 30 14:21:51 compute-0 sudo[130401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 14:21:51 compute-0 python3.9[130403]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Sep 30 14:21:51 compute-0 sudo[130401]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:51.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:51.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:52 compute-0 sshd-session[123127]: Connection closed by 192.168.122.30 port 34682
Sep 30 14:21:52 compute-0 sshd-session[123123]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:21:52 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Sep 30 14:21:52 compute-0 systemd[1]: session-44.scope: Consumed 28.256s CPU time.
Sep 30 14:21:52 compute-0 systemd-logind[808]: Session 44 logged out. Waiting for processes to exit.
Sep 30 14:21:52 compute-0 systemd-logind[808]: Removed session 44.
Sep 30 14:21:52 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Scheduled restart job, restart counter is at 3.
Sep 30 14:21:52 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:21:52 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.430s CPU time.
Sep 30 14:21:52 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:21:52 compute-0 ceph-mon[74194]: pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 14:21:52 compute-0 podman[130475]: 2025-09-30 14:21:52.76796308 +0000 UTC m=+0.035613557 container create 09de7eac6ed58a85e37b5b069644aa52f054189a78284dba9b5a23b9104c763e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0229a34b6b8fcb7e34c06790d6792ac010ce8f1f37a0c40d76de82a07184e648/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0229a34b6b8fcb7e34c06790d6792ac010ce8f1f37a0c40d76de82a07184e648/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0229a34b6b8fcb7e34c06790d6792ac010ce8f1f37a0c40d76de82a07184e648/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0229a34b6b8fcb7e34c06790d6792ac010ce8f1f37a0c40d76de82a07184e648/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:21:52 compute-0 podman[130475]: 2025-09-30 14:21:52.819398201 +0000 UTC m=+0.087048698 container init 09de7eac6ed58a85e37b5b069644aa52f054189a78284dba9b5a23b9104c763e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:21:52 compute-0 podman[130475]: 2025-09-30 14:21:52.823952491 +0000 UTC m=+0.091602968 container start 09de7eac6ed58a85e37b5b069644aa52f054189a78284dba9b5a23b9104c763e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Sep 30 14:21:52 compute-0 bash[130475]: 09de7eac6ed58a85e37b5b069644aa52f054189a78284dba9b5a23b9104c763e
Sep 30 14:21:52 compute-0 podman[130475]: 2025-09-30 14:21:52.752008401 +0000 UTC m=+0.019658908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:21:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:21:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:21:52 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:21:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:21:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:21:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:21:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:21:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:21:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:21:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 14:21:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:53.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:53.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:54 compute-0 ceph-mon[74194]: pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 14:21:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:54] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:21:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:21:54] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:21:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:21:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 14:21:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:55.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:55.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:56 compute-0 ceph-mon[74194]: pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 14:21:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:56.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:21:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:56.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:21:57 compute-0 sshd-session[130535]: Accepted publickey for zuul from 192.168.122.30 port 41036 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:21:57 compute-0 systemd-logind[808]: New session 45 of user zuul.
Sep 30 14:21:57 compute-0 systemd[1]: Started Session 45 of User zuul.
Sep 30 14:21:57 compute-0 sshd-session[130535]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:21:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:21:57 compute-0 sudo[130689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxagdsmvylmdlbswrcmqzoawgltmeise ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242117.153013-18-257298360833733/AnsiballZ_tempfile.py'
Sep 30 14:21:57 compute-0 sudo[130689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:57.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:57 compute-0 python3.9[130691]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Sep 30 14:21:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:21:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:57.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:21:57 compute-0 sudo[130689]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:58 compute-0 sudo[130842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbqjamywswaahoocblivjgqtpccwaoco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242118.016972-54-94933400676859/AnsiballZ_stat.py'
Sep 30 14:21:58 compute-0 sudo[130842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:58 compute-0 python3.9[130844]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:21:58 compute-0 ceph-mon[74194]: pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:21:58 compute-0 sudo[130842]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:58 compute-0 sudo[130848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:21:58 compute-0 sudo[130848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:21:58 compute-0 sudo[130848]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:21:58.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:21:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Sep 30 14:21:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Sep 30 14:21:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:21:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:21:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:21:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:21:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:21:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:21:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:21:59 compute-0 sudo[131021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmtnxoizpypfthprvwrhbhztfmrdhaal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242118.8711998-78-174183111996622/AnsiballZ_slurp.py'
Sep 30 14:21:59 compute-0 sudo[131021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:21:59
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', '.mgr', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.log']
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:21:59 compute-0 python3.9[131023]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 14:21:59 compute-0 sudo[131021]: pam_unix(sudo:session): session closed for user root
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:21:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:21:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:21:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:21:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:21:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:21:59.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:21:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:21:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:21:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:21:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:21:59.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:21:59 compute-0 sudo[131175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heounmfuwretdmereyqwspiwlerbifkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242119.6719782-102-221352585580098/AnsiballZ_stat.py'
Sep 30 14:21:59 compute-0 sudo[131175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:00 compute-0 python3.9[131177]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.ohtdbxbf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:00 compute-0 sudo[131175]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:00 compute-0 ceph-mon[74194]: pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Sep 30 14:22:00 compute-0 sudo[131300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tciowppwjxlqpmwtbsysuioqinukiknw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242119.6719782-102-221352585580098/AnsiballZ_copy.py'
Sep 30 14:22:00 compute-0 sudo[131300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:22:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:22:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:22:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:22:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:22:00 compute-0 python3.9[131302]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.ohtdbxbf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242119.6719782-102-221352585580098/.source.ohtdbxbf _original_basename=.c1n8e4tv follow=False checksum=d73f8168c4ac87c0ac633268563a45ccdfe16ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:22:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:22:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:22:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:22:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:22:00 compute-0 sudo[131300]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Sep 30 14:22:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:01.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:01 compute-0 sudo[131454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szdsvpewyhwbqczhusgokdfwtkqpxlxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242121.063407-147-230018113081708/AnsiballZ_setup.py'
Sep 30 14:22:01 compute-0 sudo[131454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:02 compute-0 python3.9[131456]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:22:02 compute-0 sudo[131454]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:02 compute-0 ceph-mon[74194]: pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Sep 30 14:22:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142202 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:22:02 compute-0 sudo[131606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpxajohtkafcnjobkwcxvukqakkfakhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242122.3577733-172-140375750896245/AnsiballZ_blockinfile.py'
Sep 30 14:22:02 compute-0 sudo[131606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:02 compute-0 python3.9[131608]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv0Wkq+j+fGZm3g4FqO0k97HEDxRPMvcwc3mezGI0d6HXz1idSLlEvisBD5x33g1ZEpqE+UKpaz5jo631sXRWDIQkB+XyBenmVirsvjb64FBUfa5ddymIDBQI7h4Nu3k3jdnLV+0VP4lc5k27jVePROBQWh5AZ504IzDUlKXASzzbP0ZT3DKXWRbeREqIK0w2errWoAuULV4cYBhmk/v4vlAliBhPh2bwRJRa43VNXHlJnX7lK1qqFHPp63fe0t23uXUssYQJ8OyJnRT7030ZOYwU4LYK0MXgYJqP7fClsFqnzrcaWJDO32L7M89peYQ5QKF0eMNHf+a15s1nhPkgnynsOqpId2OleuJZqpt00reWSxmwfG09sb6EwI4EAxGTWWL87DuwXz3ipJMbrRa+8PrLTAjrLuHC10aMtq1qejCQJgHd4yZVZ024zi9KHZMgVnzedQF9Byf3u2ZnJSMNok8VHosQ/ny421qEgNF44XEbNUD56bXCSxsU5Gg+yNNs=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP4KjWL9aevmnyMtrV7tu6pE8vgoG3wZbh9qSrtJ+XoC
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOqI6tqXjTKjeu4yD2TwTST9ws8xuagG9BbnXQ6fvmDvvniDkLihQ6k7GTTmBGgJE5lCje5bKOG2MRkcVCNXhKU=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDv3lqvLSdSe9FfjIvLbovEc/EXXFpVSKrphGNEdNwPKWxKCqbRYPxeqJl9Jji4K0tZVFsnk2y80vkJi2t49CsgHkulvDipHH0WbxzT5JmxX3U03kqn5gkmrCxpqL6za8bs9Q7mkt6mjkWly6gcmfLpKuuLvUxZKOU1LZ2AVlGJ+lx8BKyB/eLXF3G5Z+SizImDNtYWWRadJLWvD5niRNMIc2TlUCokf7CPDF8EiD9l/XSjvS1B8gsIkbj61bZbc5FPy0L7Rf3R2/GQep+DOwM+SlKvMhN3JDAnmMlD3OXlJNYzMbwR38RaTSg1pFgzzOPsqZ9Iz2JfJ1PsEjDeExvLlplJumOgKmj0EVqPUzSrgMHEIqGK1cql5+xL2pPsaxx+7FoLVxTyLuFpNgy3DE7BTVpJsFThJyWiQQOxp4VvZeErMHcsAbyAgDLQdb6+hj/Ywpz+IVhhCI/z71G4iDd0vr30Ege2Mu65bqGRrTGryXZjFKR1aotsf9ftBCV0WkM=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJtA9linW0FhxFVV4OOPBy2+xpEXZnSB7XZ4XJ6LwDJf
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFs/AgtcpVNFq8p4SVbSHfwdF0vUxZGYjSLggzy7X+2gYefshG0Ix5Z3uc2A1+UYgtw8a3032k+JQ3hw3F4uXS8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnxECDV8o4yTWjIh4fvbRM7O3xbGJ1m92I7pON0ACAWIAouASSEepZzIkP/5+xhc3FvoENphurtvwMkG/2EO54537hANpdReJX9jOK8oyBKFY69IjJkJOJeVP+oxwcxoh3EWtJs1YuvUmWc2OlOxw1dU/jABEbEjdAKAhvqxRaqUYugro6sW3wPvfJAchlkp6HZlUOKtLNvQYY0TgEm3KEnnNRPy81PrLCBPFw+4r/4OLCLfGiNNBXurueYIi2AtJU5ri8w0IasaCJIuRaf0b9nZb9YhYheEZwNMWWo0TqqWLjxpEpkAwEpFt20BG5gWVcehU6LTHU1jhBHtvj/bw29G3Bjj661M2x1TalNg1qVS1uqHqt+iaTYHDkjU6EDBgNTlJB2E7o5g8gx5odi1xDt1+82pz2ofs9HExCG8e34PG+VPbiITiBmYxIhD/sedYo/whhBpwnk8Ntc6FiTJ8YKZFoDrdpRszhCSjF3Ku4tV3K/OALpdEj9gfZof1g9w0=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBHaVDEpiNIgxbcdiDZPInyHzgYXaub7mLSciYJRys3z
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCxQB4y9hJVNDPIO/1RzO1QfaaNnxXt0XWNC3imzikzmekKOgg80jMXW/2phxTZXO0o7+FqN5NV4+uvp8a+O56I=
                                              create=True mode=0644 path=/tmp/ansible.ohtdbxbf state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:03 compute-0 sudo[131606]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:22:03 compute-0 sudo[131759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvafsfkvyybxdqmomihtbonyaeuqdxjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242123.1783094-196-186156804381342/AnsiballZ_command.py'
Sep 30 14:22:03 compute-0 sudo[131759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:03.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:03 compute-0 python3.9[131761]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ohtdbxbf' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:22:03 compute-0 sudo[131759]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:03.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:04 compute-0 sudo[131914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjfhevhykwxxvwulgpvrirvpwjkcpafy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242123.9834719-220-167176654056045/AnsiballZ_file.py'
Sep 30 14:22:04 compute-0 sudo[131914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:04 compute-0 python3.9[131916]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ohtdbxbf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:04 compute-0 sudo[131914]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:04 compute-0 ceph-mon[74194]: pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:04] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Sep 30 14:22:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:04] "GET /metrics HTTP/1.1" 200 48411 "" "Prometheus/2.51.0"
Sep 30 14:22:04 compute-0 sshd-session[130538]: Connection closed by 192.168.122.30 port 41036
Sep 30 14:22:04 compute-0 sshd-session[130535]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000009:nfs.cephfs.2: -2
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:22:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:22:04 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Sep 30 14:22:04 compute-0 systemd[1]: session-45.scope: Consumed 4.919s CPU time.
Sep 30 14:22:04 compute-0 systemd-logind[808]: Session 45 logged out. Waiting for processes to exit.
Sep 30 14:22:04 compute-0 systemd-logind[808]: Removed session 45.
Sep 30 14:22:04 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:22:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:22:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.507249) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242125507283, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1355, "num_deletes": 251, "total_data_size": 2603485, "memory_usage": 2648888, "flush_reason": "Manual Compaction"}
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242125521901, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1536716, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10982, "largest_seqno": 12336, "table_properties": {"data_size": 1531931, "index_size": 2181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12279, "raw_average_key_size": 20, "raw_value_size": 1521504, "raw_average_value_size": 2498, "num_data_blocks": 97, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241993, "oldest_key_time": 1759241993, "file_creation_time": 1759242125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 14711 microseconds, and 3896 cpu microseconds.
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.521957) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1536716 bytes OK
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.521975) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.523718) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.523735) EVENT_LOG_v1 {"time_micros": 1759242125523730, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.523752) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2597649, prev total WAL file size 2597649, number of live WAL files 2.
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.524604) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1500KB)], [26(13MB)]
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242125524636, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16064136, "oldest_snapshot_seqno": -1}
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4280 keys, 13812135 bytes, temperature: kUnknown
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242125621274, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 13812135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13779460, "index_size": 20853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 108715, "raw_average_key_size": 25, "raw_value_size": 13697196, "raw_average_value_size": 3200, "num_data_blocks": 894, "num_entries": 4280, "num_filter_entries": 4280, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759242125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.621487) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 13812135 bytes
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.624139) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.1 rd, 142.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.9 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(19.4) write-amplify(9.0) OK, records in: 4737, records dropped: 457 output_compression: NoCompression
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.624159) EVENT_LOG_v1 {"time_micros": 1759242125624151, "job": 10, "event": "compaction_finished", "compaction_time_micros": 96702, "compaction_time_cpu_micros": 27820, "output_level": 6, "num_output_files": 1, "total_output_size": 13812135, "num_input_records": 4737, "num_output_records": 4280, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242125624544, "job": 10, "event": "table_file_deletion", "file_number": 28}
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242125627113, "job": 10, "event": "table_file_deletion", "file_number": 26}
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.524527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.627197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.627202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.627203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.627205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:22:05 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:22:05.627206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:22:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:05.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:05.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:06 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:06 compute-0 sudo[131960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:22:06 compute-0 sudo[131960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:06 compute-0 sudo[131960]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:06 compute-0 sudo[131985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:22:06 compute-0 sudo[131985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:06 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:06 compute-0 ceph-mon[74194]: pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:22:06 compute-0 sudo[131985]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:22:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:22:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:22:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:22:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:22:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 14:22:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:22:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:22:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:22:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:22:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:22:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:22:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:22:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:22:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:22:06 compute-0 sudo[132043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:22:06 compute-0 sudo[132043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:06 compute-0 sudo[132043]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:06.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:22:06 compute-0 sudo[132068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:22:06 compute-0 sudo[132068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:07 compute-0 podman[132135]: 2025-09-30 14:22:07.382040438 +0000 UTC m=+0.043324179 container create a05dc9dceb64c010e44f2686fc31dc7849a4f61db342970081571f2353e2ee0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:22:07 compute-0 systemd[1]: Started libpod-conmon-a05dc9dceb64c010e44f2686fc31dc7849a4f61db342970081571f2353e2ee0b.scope.
Sep 30 14:22:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:22:07 compute-0 podman[132135]: 2025-09-30 14:22:07.358849209 +0000 UTC m=+0.020132950 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:22:07 compute-0 podman[132135]: 2025-09-30 14:22:07.485440575 +0000 UTC m=+0.146724316 container init a05dc9dceb64c010e44f2686fc31dc7849a4f61db342970081571f2353e2ee0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shaw, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:22:07 compute-0 podman[132135]: 2025-09-30 14:22:07.49324709 +0000 UTC m=+0.154530811 container start a05dc9dceb64c010e44f2686fc31dc7849a4f61db342970081571f2353e2ee0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shaw, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:22:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142207 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:22:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:07 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:07 compute-0 boring_shaw[132152]: 167 167
Sep 30 14:22:07 compute-0 podman[132135]: 2025-09-30 14:22:07.497086961 +0000 UTC m=+0.158370712 container attach a05dc9dceb64c010e44f2686fc31dc7849a4f61db342970081571f2353e2ee0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shaw, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:22:07 compute-0 systemd[1]: libpod-a05dc9dceb64c010e44f2686fc31dc7849a4f61db342970081571f2353e2ee0b.scope: Deactivated successfully.
Sep 30 14:22:07 compute-0 podman[132135]: 2025-09-30 14:22:07.49896598 +0000 UTC m=+0.160249701 container died a05dc9dceb64c010e44f2686fc31dc7849a4f61db342970081571f2353e2ee0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:22:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17091aa288988a7b0f6af970ead59076b0db46125a1cab8a966003a1046e95d-merged.mount: Deactivated successfully.
Sep 30 14:22:07 compute-0 podman[132135]: 2025-09-30 14:22:07.551653105 +0000 UTC m=+0.212936826 container remove a05dc9dceb64c010e44f2686fc31dc7849a4f61db342970081571f2353e2ee0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 14:22:07 compute-0 systemd[1]: libpod-conmon-a05dc9dceb64c010e44f2686fc31dc7849a4f61db342970081571f2353e2ee0b.scope: Deactivated successfully.
Sep 30 14:22:07 compute-0 podman[132177]: 2025-09-30 14:22:07.701213865 +0000 UTC m=+0.040547287 container create 112dca6a971850919d9cd9a415ccd09045c535b1b01760e7f6b4f56a6a24f51b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:22:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:22:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:22:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:22:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:22:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:22:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:22:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:22:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:07.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:07 compute-0 systemd[1]: Started libpod-conmon-112dca6a971850919d9cd9a415ccd09045c535b1b01760e7f6b4f56a6a24f51b.scope.
Sep 30 14:22:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177cefbc95e3b3dbea56502f6cd9275888627d6551792514828d8ed840b08ed1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177cefbc95e3b3dbea56502f6cd9275888627d6551792514828d8ed840b08ed1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177cefbc95e3b3dbea56502f6cd9275888627d6551792514828d8ed840b08ed1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177cefbc95e3b3dbea56502f6cd9275888627d6551792514828d8ed840b08ed1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177cefbc95e3b3dbea56502f6cd9275888627d6551792514828d8ed840b08ed1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:07 compute-0 podman[132177]: 2025-09-30 14:22:07.778982067 +0000 UTC m=+0.118315509 container init 112dca6a971850919d9cd9a415ccd09045c535b1b01760e7f6b4f56a6a24f51b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 14:22:07 compute-0 podman[132177]: 2025-09-30 14:22:07.685936633 +0000 UTC m=+0.025270085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:22:07 compute-0 podman[132177]: 2025-09-30 14:22:07.79243309 +0000 UTC m=+0.131766512 container start 112dca6a971850919d9cd9a415ccd09045c535b1b01760e7f6b4f56a6a24f51b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_pasteur, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:22:07 compute-0 podman[132177]: 2025-09-30 14:22:07.795675806 +0000 UTC m=+0.135009248 container attach 112dca6a971850919d9cd9a415ccd09045c535b1b01760e7f6b4f56a6a24f51b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_pasteur, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:22:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:07.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142208 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:22:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:08 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:08 compute-0 bold_pasteur[132195]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:22:08 compute-0 bold_pasteur[132195]: --> All data devices are unavailable
Sep 30 14:22:08 compute-0 systemd[1]: libpod-112dca6a971850919d9cd9a415ccd09045c535b1b01760e7f6b4f56a6a24f51b.scope: Deactivated successfully.
Sep 30 14:22:08 compute-0 podman[132177]: 2025-09-30 14:22:08.118994811 +0000 UTC m=+0.458328233 container died 112dca6a971850919d9cd9a415ccd09045c535b1b01760e7f6b4f56a6a24f51b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:22:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-177cefbc95e3b3dbea56502f6cd9275888627d6551792514828d8ed840b08ed1-merged.mount: Deactivated successfully.
Sep 30 14:22:08 compute-0 podman[132177]: 2025-09-30 14:22:08.15891818 +0000 UTC m=+0.498251602 container remove 112dca6a971850919d9cd9a415ccd09045c535b1b01760e7f6b4f56a6a24f51b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:22:08 compute-0 systemd[1]: libpod-conmon-112dca6a971850919d9cd9a415ccd09045c535b1b01760e7f6b4f56a6a24f51b.scope: Deactivated successfully.
Sep 30 14:22:08 compute-0 sudo[132068]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:08 compute-0 sudo[132220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:22:08 compute-0 sudo[132220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:08 compute-0 sudo[132220]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:08 compute-0 sudo[132245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:22:08 compute-0 sudo[132245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:08 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:08 compute-0 podman[132314]: 2025-09-30 14:22:08.702709868 +0000 UTC m=+0.034743984 container create 28629cdd42ad29a355ea5f87d90810ac2ed8e12976f43ebef9dbe368ce13c13d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hellman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:22:08 compute-0 ceph-mon[74194]: pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 14:22:08 compute-0 systemd[1]: Started libpod-conmon-28629cdd42ad29a355ea5f87d90810ac2ed8e12976f43ebef9dbe368ce13c13d.scope.
Sep 30 14:22:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:22:08 compute-0 podman[132314]: 2025-09-30 14:22:08.780258886 +0000 UTC m=+0.112293022 container init 28629cdd42ad29a355ea5f87d90810ac2ed8e12976f43ebef9dbe368ce13c13d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:22:08 compute-0 podman[132314]: 2025-09-30 14:22:08.688368562 +0000 UTC m=+0.020402708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:22:08 compute-0 podman[132314]: 2025-09-30 14:22:08.787889887 +0000 UTC m=+0.119924003 container start 28629cdd42ad29a355ea5f87d90810ac2ed8e12976f43ebef9dbe368ce13c13d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 14:22:08 compute-0 podman[132314]: 2025-09-30 14:22:08.790331001 +0000 UTC m=+0.122365137 container attach 28629cdd42ad29a355ea5f87d90810ac2ed8e12976f43ebef9dbe368ce13c13d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hellman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 14:22:08 compute-0 agitated_hellman[132330]: 167 167
Sep 30 14:22:08 compute-0 systemd[1]: libpod-28629cdd42ad29a355ea5f87d90810ac2ed8e12976f43ebef9dbe368ce13c13d.scope: Deactivated successfully.
Sep 30 14:22:08 compute-0 podman[132314]: 2025-09-30 14:22:08.79297411 +0000 UTC m=+0.125008226 container died 28629cdd42ad29a355ea5f87d90810ac2ed8e12976f43ebef9dbe368ce13c13d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hellman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 14:22:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-60f09bea285c96e4e449b5ad26f67e380444f2fe61910caa42cb4f519409c65b-merged.mount: Deactivated successfully.
Sep 30 14:22:08 compute-0 podman[132314]: 2025-09-30 14:22:08.825842294 +0000 UTC m=+0.157876410 container remove 28629cdd42ad29a355ea5f87d90810ac2ed8e12976f43ebef9dbe368ce13c13d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Sep 30 14:22:08 compute-0 systemd[1]: libpod-conmon-28629cdd42ad29a355ea5f87d90810ac2ed8e12976f43ebef9dbe368ce13c13d.scope: Deactivated successfully.
Sep 30 14:22:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 720 B/s wr, 3 op/s
Sep 30 14:22:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:08.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:22:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:08.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:22:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:08.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:22:08 compute-0 podman[132353]: 2025-09-30 14:22:08.965426471 +0000 UTC m=+0.038614235 container create 7cb9f5709b69df5c0b798f3312a87f4fa09c3e258f5f27eeb4f511fe86504e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_banach, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 14:22:09 compute-0 systemd[1]: Started libpod-conmon-7cb9f5709b69df5c0b798f3312a87f4fa09c3e258f5f27eeb4f511fe86504e8b.scope.
Sep 30 14:22:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:22:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc34a30e4964adf2120317093ea53bbd1113fb728707e569345a146d7654bf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc34a30e4964adf2120317093ea53bbd1113fb728707e569345a146d7654bf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:09 compute-0 podman[132353]: 2025-09-30 14:22:08.949943915 +0000 UTC m=+0.023131709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:22:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc34a30e4964adf2120317093ea53bbd1113fb728707e569345a146d7654bf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc34a30e4964adf2120317093ea53bbd1113fb728707e569345a146d7654bf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:09 compute-0 podman[132353]: 2025-09-30 14:22:09.056476914 +0000 UTC m=+0.129664698 container init 7cb9f5709b69df5c0b798f3312a87f4fa09c3e258f5f27eeb4f511fe86504e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_banach, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:22:09 compute-0 podman[132353]: 2025-09-30 14:22:09.066251951 +0000 UTC m=+0.139439715 container start 7cb9f5709b69df5c0b798f3312a87f4fa09c3e258f5f27eeb4f511fe86504e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 14:22:09 compute-0 podman[132353]: 2025-09-30 14:22:09.070707748 +0000 UTC m=+0.143895512 container attach 7cb9f5709b69df5c0b798f3312a87f4fa09c3e258f5f27eeb4f511fe86504e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_banach, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:22:09 compute-0 elastic_banach[132370]: {
Sep 30 14:22:09 compute-0 elastic_banach[132370]:     "0": [
Sep 30 14:22:09 compute-0 elastic_banach[132370]:         {
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "devices": [
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "/dev/loop3"
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             ],
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "lv_name": "ceph_lv0",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "lv_size": "21470642176",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "name": "ceph_lv0",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "tags": {
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.cluster_name": "ceph",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.crush_device_class": "",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.encrypted": "0",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.osd_id": "0",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.type": "block",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.vdo": "0",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:                 "ceph.with_tpm": "0"
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             },
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "type": "block",
Sep 30 14:22:09 compute-0 elastic_banach[132370]:             "vg_name": "ceph_vg0"
Sep 30 14:22:09 compute-0 elastic_banach[132370]:         }
Sep 30 14:22:09 compute-0 elastic_banach[132370]:     ]
Sep 30 14:22:09 compute-0 elastic_banach[132370]: }
Sep 30 14:22:09 compute-0 systemd[1]: libpod-7cb9f5709b69df5c0b798f3312a87f4fa09c3e258f5f27eeb4f511fe86504e8b.scope: Deactivated successfully.
Sep 30 14:22:09 compute-0 podman[132353]: 2025-09-30 14:22:09.355660935 +0000 UTC m=+0.428848699 container died 7cb9f5709b69df5c0b798f3312a87f4fa09c3e258f5f27eeb4f511fe86504e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_banach, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 14:22:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fc34a30e4964adf2120317093ea53bbd1113fb728707e569345a146d7654bf3-merged.mount: Deactivated successfully.
Sep 30 14:22:09 compute-0 podman[132353]: 2025-09-30 14:22:09.401647023 +0000 UTC m=+0.474834787 container remove 7cb9f5709b69df5c0b798f3312a87f4fa09c3e258f5f27eeb4f511fe86504e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_banach, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:22:09 compute-0 systemd[1]: libpod-conmon-7cb9f5709b69df5c0b798f3312a87f4fa09c3e258f5f27eeb4f511fe86504e8b.scope: Deactivated successfully.
Sep 30 14:22:09 compute-0 sudo[132245]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:09 compute-0 sudo[132391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:22:09 compute-0 sudo[132391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:09 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:09 compute-0 sudo[132391]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:09 compute-0 sudo[132416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:22:09 compute-0 sudo[132416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:22:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2550 writes, 12K keys, 2550 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2550 writes, 2550 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2550 writes, 12K keys, 2550 commit groups, 1.0 writes per commit group, ingest: 23.35 MB, 0.04 MB/s
                                           Interval WAL: 2550 writes, 2550 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     57.3      0.36              0.04         5    0.072       0      0       0.0       0.0
                                             L6      1/0   13.17 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.5     85.4     74.5      0.68              0.11         4    0.169     16K   1807       0.0       0.0
                                            Sum      1/0   13.17 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.5     55.8     68.5      1.04              0.16         9    0.115     16K   1807       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.5     56.0     68.7      1.03              0.16         8    0.129     16K   1807       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0     85.4     74.5      0.68              0.11         4    0.169     16K   1807       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     57.8      0.35              0.04         4    0.089       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.020
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d7211350#2 capacity: 304.00 MB usage: 2.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(161,2.09 MB,0.688001%) FilterBlock(10,57.61 KB,0.0185063%) IndexBlock(10,121.36 KB,0.0389852%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 14:22:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:09.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:09.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:09 compute-0 podman[132482]: 2025-09-30 14:22:09.904375373 +0000 UTC m=+0.038025210 container create aeeef753b0fc62aeb40daaa5f8bf3cc18e0cd326d4b1859844ef4c9435dc5750 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:22:09 compute-0 systemd[1]: Started libpod-conmon-aeeef753b0fc62aeb40daaa5f8bf3cc18e0cd326d4b1859844ef4c9435dc5750.scope.
Sep 30 14:22:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:22:09 compute-0 podman[132482]: 2025-09-30 14:22:09.889967784 +0000 UTC m=+0.023617641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:22:09 compute-0 podman[132482]: 2025-09-30 14:22:09.984892008 +0000 UTC m=+0.118541865 container init aeeef753b0fc62aeb40daaa5f8bf3cc18e0cd326d4b1859844ef4c9435dc5750 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:22:09 compute-0 podman[132482]: 2025-09-30 14:22:09.992896579 +0000 UTC m=+0.126546426 container start aeeef753b0fc62aeb40daaa5f8bf3cc18e0cd326d4b1859844ef4c9435dc5750 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 14:22:09 compute-0 podman[132482]: 2025-09-30 14:22:09.996123514 +0000 UTC m=+0.129773361 container attach aeeef753b0fc62aeb40daaa5f8bf3cc18e0cd326d4b1859844ef4c9435dc5750 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_boyd, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:22:09 compute-0 intelligent_boyd[132498]: 167 167
Sep 30 14:22:09 compute-0 systemd[1]: libpod-aeeef753b0fc62aeb40daaa5f8bf3cc18e0cd326d4b1859844ef4c9435dc5750.scope: Deactivated successfully.
Sep 30 14:22:09 compute-0 podman[132482]: 2025-09-30 14:22:09.998878196 +0000 UTC m=+0.132528053 container died aeeef753b0fc62aeb40daaa5f8bf3cc18e0cd326d4b1859844ef4c9435dc5750 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_boyd, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:22:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-32140cda8703f59dd3270e895fd78b836048c706a935d69267ceaf17241f881f-merged.mount: Deactivated successfully.
Sep 30 14:22:10 compute-0 podman[132482]: 2025-09-30 14:22:10.038211039 +0000 UTC m=+0.171860886 container remove aeeef753b0fc62aeb40daaa5f8bf3cc18e0cd326d4b1859844ef4c9435dc5750 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_boyd, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:22:10 compute-0 systemd[1]: libpod-conmon-aeeef753b0fc62aeb40daaa5f8bf3cc18e0cd326d4b1859844ef4c9435dc5750.scope: Deactivated successfully.
Sep 30 14:22:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:10 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:10 compute-0 sshd-session[132515]: Accepted publickey for zuul from 192.168.122.30 port 49758 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:22:10 compute-0 systemd-logind[808]: New session 46 of user zuul.
Sep 30 14:22:10 compute-0 systemd[1]: Started Session 46 of User zuul.
Sep 30 14:22:10 compute-0 sshd-session[132515]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:22:10 compute-0 podman[132526]: 2025-09-30 14:22:10.234288851 +0000 UTC m=+0.056749512 container create c2e2712b59fad15ff3116f1b7bf5921be0e9fdeabdd89b81eb1c074d8c63adf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hermann, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:22:10 compute-0 systemd[1]: Started libpod-conmon-c2e2712b59fad15ff3116f1b7bf5921be0e9fdeabdd89b81eb1c074d8c63adf6.scope.
Sep 30 14:22:10 compute-0 podman[132526]: 2025-09-30 14:22:10.210008833 +0000 UTC m=+0.032469544 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:22:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74299fcad29670028c48eda27e41ce1134c080c70aad55770d7467fe1e75fa3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74299fcad29670028c48eda27e41ce1134c080c70aad55770d7467fe1e75fa3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74299fcad29670028c48eda27e41ce1134c080c70aad55770d7467fe1e75fa3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74299fcad29670028c48eda27e41ce1134c080c70aad55770d7467fe1e75fa3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:22:10 compute-0 podman[132526]: 2025-09-30 14:22:10.32748753 +0000 UTC m=+0.149948181 container init c2e2712b59fad15ff3116f1b7bf5921be0e9fdeabdd89b81eb1c074d8c63adf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Sep 30 14:22:10 compute-0 podman[132526]: 2025-09-30 14:22:10.337859883 +0000 UTC m=+0.160320504 container start c2e2712b59fad15ff3116f1b7bf5921be0e9fdeabdd89b81eb1c074d8c63adf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hermann, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:22:10 compute-0 podman[132526]: 2025-09-30 14:22:10.341028226 +0000 UTC m=+0.163488867 container attach c2e2712b59fad15ff3116f1b7bf5921be0e9fdeabdd89b81eb1c074d8c63adf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hermann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:22:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:10 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:10 compute-0 sshd-session[71089]: Received disconnect from 38.129.56.219 port 49486:11: disconnected by user
Sep 30 14:22:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:10 compute-0 sshd-session[71089]: Disconnected from user zuul 38.129.56.219 port 49486
Sep 30 14:22:10 compute-0 sshd-session[71086]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:22:10 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Sep 30 14:22:10 compute-0 systemd[1]: session-19.scope: Consumed 1min 35.891s CPU time.
Sep 30 14:22:10 compute-0 systemd-logind[808]: Session 19 logged out. Waiting for processes to exit.
Sep 30 14:22:10 compute-0 systemd-logind[808]: Removed session 19.
Sep 30 14:22:10 compute-0 ceph-mon[74194]: pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 720 B/s wr, 3 op/s
Sep 30 14:22:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 720 B/s wr, 3 op/s
Sep 30 14:22:10 compute-0 lvm[132723]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:22:10 compute-0 lvm[132723]: VG ceph_vg0 finished
Sep 30 14:22:11 compute-0 lucid_hermann[132569]: {}
Sep 30 14:22:11 compute-0 systemd[1]: libpod-c2e2712b59fad15ff3116f1b7bf5921be0e9fdeabdd89b81eb1c074d8c63adf6.scope: Deactivated successfully.
Sep 30 14:22:11 compute-0 systemd[1]: libpod-c2e2712b59fad15ff3116f1b7bf5921be0e9fdeabdd89b81eb1c074d8c63adf6.scope: Consumed 1.132s CPU time.
Sep 30 14:22:11 compute-0 podman[132526]: 2025-09-30 14:22:11.05401521 +0000 UTC m=+0.876475851 container died c2e2712b59fad15ff3116f1b7bf5921be0e9fdeabdd89b81eb1c074d8c63adf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hermann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 14:22:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-74299fcad29670028c48eda27e41ce1134c080c70aad55770d7467fe1e75fa3b-merged.mount: Deactivated successfully.
Sep 30 14:22:11 compute-0 podman[132526]: 2025-09-30 14:22:11.095850529 +0000 UTC m=+0.918311150 container remove c2e2712b59fad15ff3116f1b7bf5921be0e9fdeabdd89b81eb1c074d8c63adf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hermann, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:22:11 compute-0 systemd[1]: libpod-conmon-c2e2712b59fad15ff3116f1b7bf5921be0e9fdeabdd89b81eb1c074d8c63adf6.scope: Deactivated successfully.
Sep 30 14:22:11 compute-0 sudo[132416]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:22:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:22:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:22:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:22:11 compute-0 sudo[132782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:22:11 compute-0 sudo[132782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:11 compute-0 sudo[132782]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:11 compute-0 python3.9[132770]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:22:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:11 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:11.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:11.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:12 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:12 compute-0 ceph-mon[74194]: pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 720 B/s wr, 3 op/s
Sep 30 14:22:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:22:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:22:12 compute-0 sudo[132962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uycbieqqluiuyzzkqqqlkenovlkkkpul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242131.7851958-56-90892312492933/AnsiballZ_systemd.py'
Sep 30 14:22:12 compute-0 sudo[132962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:12 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22680016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:12 compute-0 python3.9[132964]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Sep 30 14:22:12 compute-0 sudo[132962]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 270 B/s wr, 1 op/s
Sep 30 14:22:13 compute-0 sudo[133116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvaykpdfdmefrymtunbytfbovqnalehi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242132.917418-80-241624278217642/AnsiballZ_systemd.py'
Sep 30 14:22:13 compute-0 sudo[133116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:13 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:13 compute-0 python3.9[133118]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:22:13 compute-0 sudo[133116]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:13.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:13.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:14 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:14 compute-0 sudo[133271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzfmqnvjohaxuyznpsgmztdwuqxvsaiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242133.8251517-107-243418727397768/AnsiballZ_command.py'
Sep 30 14:22:14 compute-0 sudo[133271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:14 compute-0 ceph-mon[74194]: pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 270 B/s wr, 1 op/s
Sep 30 14:22:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:14 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:14 compute-0 python3.9[133273]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:22:14 compute-0 sudo[133271]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:22:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:22:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:14] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Sep 30 14:22:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:14] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Sep 30 14:22:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 270 B/s wr, 1 op/s
Sep 30 14:22:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:22:15 compute-0 sudo[133425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiyrizphrmphebouftdcemsfgvyaorpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242134.9699755-131-208654232177271/AnsiballZ_stat.py'
Sep 30 14:22:15 compute-0 sudo[133425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:15 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22700016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:15 compute-0 python3.9[133427]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:22:15 compute-0 sudo[133425]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:15.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:15.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:16 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:16 compute-0 sudo[133578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxxleugnjhvqwfyrdfwdifsavdbgyxar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242135.8848338-158-123430733772357/AnsiballZ_file.py'
Sep 30 14:22:16 compute-0 sudo[133578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:16 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:16 compute-0 ceph-mon[74194]: pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 270 B/s wr, 1 op/s
Sep 30 14:22:16 compute-0 python3.9[133580]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:16 compute-0 sudo[133578]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 270 B/s wr, 1 op/s
Sep 30 14:22:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:16.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:22:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:16.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:22:17 compute-0 sshd-session[132523]: Connection closed by 192.168.122.30 port 49758
Sep 30 14:22:17 compute-0 sshd-session[132515]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:22:17 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Sep 30 14:22:17 compute-0 systemd[1]: session-46.scope: Consumed 3.800s CPU time.
Sep 30 14:22:17 compute-0 systemd-logind[808]: Session 46 logged out. Waiting for processes to exit.
Sep 30 14:22:17 compute-0 systemd-logind[808]: Removed session 46.
Sep 30 14:22:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:17 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:17.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:17.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:18 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:18 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:18 compute-0 ceph-mon[74194]: pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 270 B/s wr, 1 op/s
Sep 30 14:22:18 compute-0 sudo[133607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:22:18 compute-0 sudo[133607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:18 compute-0 sudo[133607]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:18.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:22:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:19 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:19.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:19.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:20 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:20 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:20 compute-0 ceph-mon[74194]: pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:21 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:21.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:21.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:22 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:22 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:22 compute-0 sshd-session[133636]: Accepted publickey for zuul from 192.168.122.30 port 56298 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:22:22 compute-0 systemd-logind[808]: New session 47 of user zuul.
Sep 30 14:22:22 compute-0 systemd[1]: Started Session 47 of User zuul.
Sep 30 14:22:22 compute-0 sshd-session[133636]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:22:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:23 compute-0 ceph-mon[74194]: pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:23 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:23 compute-0 python3.9[133790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:22:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:23.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:23.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:24 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:24 compute-0 ceph-mon[74194]: pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:24 compute-0 sudo[133945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnlrpdjdmkrajicoprigsbbqrixitlon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242144.0932527-62-68644266943454/AnsiballZ_setup.py'
Sep 30 14:22:24 compute-0 sudo[133945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:24 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:24 compute-0 python3.9[133947]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:22:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:24] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Sep 30 14:22:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:24] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Sep 30 14:22:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:24 compute-0 sudo[133945]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:25 compute-0 sudo[134030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tobfxraybpahtyjdkqkxlnbwtklulpka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242144.0932527-62-68644266943454/AnsiballZ_dnf.py'
Sep 30 14:22:25 compute-0 sudo[134030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:25 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:25 compute-0 python3.9[134032]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Sep 30 14:22:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:25.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:25.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:26 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:26 compute-0 ceph-mon[74194]: pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:26 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:22:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:26.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:22:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:26.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:22:27 compute-0 sudo[134030]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:27 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:27.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:27 compute-0 python3.9[134185]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:22:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:27.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:28 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:28 compute-0 ceph-mon[74194]: pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:22:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:28 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:28.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:22:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:29 compute-0 python3.9[134337]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 14:22:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:29 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:22:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:22:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:22:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:22:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:29.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:22:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:22:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:22:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:22:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:29.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:30 compute-0 python3.9[134489]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:22:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:30 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:30 compute-0 ceph-mon[74194]: pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:22:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:30 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:30 compute-0 python3.9[134639]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:22:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:31 compute-0 sshd-session[133639]: Connection closed by 192.168.122.30 port 56298
Sep 30 14:22:31 compute-0 sshd-session[133636]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:22:31 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Sep 30 14:22:31 compute-0 systemd[1]: session-47.scope: Consumed 5.864s CPU time.
Sep 30 14:22:31 compute-0 systemd-logind[808]: Session 47 logged out. Waiting for processes to exit.
Sep 30 14:22:31 compute-0 systemd-logind[808]: Removed session 47.
Sep 30 14:22:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:31 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:31.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:31.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:32 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:32 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268002d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:32 compute-0 ceph-mon[74194]: pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142233 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:22:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:33 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:33.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:34 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:34 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:34 compute-0 ceph-mon[74194]: pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:34] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Sep 30 14:22:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:34] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Sep 30 14:22:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:35 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:22:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:22:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:35.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:36 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:36 compute-0 sshd-session[134670]: Accepted publickey for zuul from 192.168.122.30 port 46808 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:22:36 compute-0 systemd-logind[808]: New session 48 of user zuul.
Sep 30 14:22:36 compute-0 systemd[1]: Started Session 48 of User zuul.
Sep 30 14:22:36 compute-0 sshd-session[134670]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:22:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:36 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:36 compute-0 ceph-mon[74194]: pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:22:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:22:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:36.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:22:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:36.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:22:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:36.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:22:37 compute-0 python3.9[134823]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:22:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:37 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:37.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:37.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:38 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:38 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2264000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:38 compute-0 ceph-mon[74194]: pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:22:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:38.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:22:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:22:38 compute-0 sudo[134956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:22:38 compute-0 sudo[134956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:38 compute-0 sudo[134956]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:38 compute-0 sudo[135005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-karucnylvtiimrjudzdjkqcrsqnjonsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242158.4230018-110-244037462324819/AnsiballZ_file.py'
Sep 30 14:22:38 compute-0 sudo[135005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:39 compute-0 python3.9[135009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:22:39 compute-0 sudo[135005]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:39 compute-0 sudo[135160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rygijhruozthybzqkcojuqbigqyfzoaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242159.2287967-110-209363689281538/AnsiballZ_file.py'
Sep 30 14:22:39 compute-0 sudo[135160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:39 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:39 compute-0 python3.9[135162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:22:39 compute-0 sudo[135160]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:39.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:39.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:40 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260000d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:40 compute-0 sudo[135313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niadclxybxkrhiowcvfepatkwojudvsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242159.8582954-155-92398563870930/AnsiballZ_stat.py'
Sep 30 14:22:40 compute-0 sudo[135313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:40 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:40 compute-0 ceph-mon[74194]: pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:22:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:40 compute-0 python3.9[135315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:40 compute-0 sudo[135313]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:22:41 compute-0 sudo[135436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvhzgfweunaqempivdtleibvcwahpwku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242159.8582954-155-92398563870930/AnsiballZ_copy.py'
Sep 30 14:22:41 compute-0 sudo[135436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:41 compute-0 python3.9[135438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242159.8582954-155-92398563870930/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=cd3aa50300b5ae69eda3bdbc7fcd5e92c0ddb182 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:41 compute-0 sudo[135436]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:41 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22640016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:41 compute-0 sudo[135589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvuembtddpkcinqbeodqqtbpkrqshcsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242161.375083-155-10361910909615/AnsiballZ_stat.py'
Sep 30 14:22:41 compute-0 sudo[135589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:41.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:41 compute-0 python3.9[135591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:41 compute-0 sudo[135589]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:41.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:41 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:22:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:42 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:42 compute-0 sudo[135713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wegfildpibbwmbncdjuucowosykgjgnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242161.375083-155-10361910909615/AnsiballZ_copy.py'
Sep 30 14:22:42 compute-0 sudo[135713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:42 compute-0 python3.9[135715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242161.375083-155-10361910909615/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=966c8124e5880131ecebc255635d865e4ce2b2f1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:42 compute-0 sudo[135713]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:42 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22600018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:42 compute-0 ceph-mon[74194]: pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:22:42 compute-0 sudo[135865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjknpmqzfwrxkytwvlhbsayzmcsccyzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242162.439297-155-274786442294015/AnsiballZ_stat.py'
Sep 30 14:22:42 compute-0 sudo[135865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:22:42 compute-0 python3.9[135867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:42 compute-0 sudo[135865]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:43 compute-0 sudo[135988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iilenybbgvskjqoduljyuzcdbbpmbjff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242162.439297-155-274786442294015/AnsiballZ_copy.py'
Sep 30 14:22:43 compute-0 sudo[135988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:43 compute-0 python3.9[135990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242162.439297-155-274786442294015/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=51ea81e2411746b85d52bed960393ec7a279e4df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:43 compute-0 sudo[135988]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:43 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:43 compute-0 sudo[136142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wumlwcvnsllebvtcmzxfnzzwrnngxznr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242163.651008-285-112643867750711/AnsiballZ_file.py'
Sep 30 14:22:43 compute-0 sudo[136142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:43.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:44 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22640016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:44 compute-0 python3.9[136144]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:22:44 compute-0 sudo[136142]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:44 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22640016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:44 compute-0 sudo[136294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmqrlpqbljpxjnvdhgstuufkpdwjdyvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242164.2202768-285-142245142987301/AnsiballZ_file.py'
Sep 30 14:22:44 compute-0 sudo[136294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:44 compute-0 ceph-mon[74194]: pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:22:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:22:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:22:44 compute-0 python3.9[136296]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:22:44 compute-0 sudo[136294]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:44] "GET /metrics HTTP/1.1" 200 48415 "" "Prometheus/2.51.0"
Sep 30 14:22:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:44] "GET /metrics HTTP/1.1" 200 48415 "" "Prometheus/2.51.0"
Sep 30 14:22:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:22:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:45 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:22:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:45 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:22:45 compute-0 sudo[136446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsnzcysjthtkutifjfyjecqvyomksolc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242164.869281-330-137610361653443/AnsiballZ_stat.py'
Sep 30 14:22:45 compute-0 sudo[136446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:45 compute-0 python3.9[136448]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:45 compute-0 sudo[136446]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:45 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22600018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:22:45 compute-0 sudo[136570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejqfbhsyrmffowellwdkleqkydsjhqip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242164.869281-330-137610361653443/AnsiballZ_copy.py'
Sep 30 14:22:45 compute-0 sudo[136570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:45.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:45 compute-0 python3.9[136572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242164.869281-330-137610361653443/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=29352c05c6409657d63a934d57f42c9f060f4a61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:45 compute-0 sudo[136570]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:22:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:45.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:22:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:46 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:46 compute-0 sudo[136723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnrkzshswlgjkivupyrbxcbnnbwmtade ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242165.9966528-330-237632239757990/AnsiballZ_stat.py'
Sep 30 14:22:46 compute-0 sudo[136723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:46 compute-0 python3.9[136725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:46 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:46 compute-0 sudo[136723]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:46 compute-0 ceph-mon[74194]: pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:22:46 compute-0 sudo[136846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amloyxtpevtvsdfppfpcjxgswzyyydzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242165.9966528-330-237632239757990/AnsiballZ_copy.py'
Sep 30 14:22:46 compute-0 sudo[136846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:22:46 compute-0 python3.9[136848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242165.9966528-330-237632239757990/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1b2c49abdf8db7d1af40c9be0864753b0c9dd09e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:46.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:22:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:46.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:22:46 compute-0 sudo[136846]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:47 compute-0 sudo[136999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glycloeovgvdihnktuserozldnfyavib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242167.1144276-330-51132836776253/AnsiballZ_stat.py'
Sep 30 14:22:47 compute-0 sudo[136999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:47 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:47 compute-0 python3.9[137001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:47 compute-0 sudo[136999]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:47.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:47 compute-0 sudo[137123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjzgiqmxuoofurdkgxgyjsrqguceftet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242167.1144276-330-51132836776253/AnsiballZ_copy.py'
Sep 30 14:22:47 compute-0 sudo[137123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:22:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:47.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:22:48 compute-0 python3.9[137125]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242167.1144276-330-51132836776253/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d0fade783382d25a9c28ddfc60055a96fb35b333 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:48 compute-0 sudo[137123]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:48 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22600018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:48 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:22:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:48 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:48 compute-0 sudo[137275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqqlqsuxgicbjtiocklibvkadtyzxcds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242168.2213933-459-243883939104996/AnsiballZ_file.py'
Sep 30 14:22:48 compute-0 sudo[137275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:48 compute-0 ceph-mon[74194]: pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:22:48 compute-0 python3.9[137277]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:22:48 compute-0 sudo[137275]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:48.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:22:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:22:49 compute-0 sudo[137427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsjumrryqfbpzhizsllpslqyueifaijh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242168.7924588-459-3207559432664/AnsiballZ_file.py'
Sep 30 14:22:49 compute-0 sudo[137427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:49 compute-0 python3.9[137429]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:22:49 compute-0 sudo[137427]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:49 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:49 compute-0 sudo[137581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blwprmirtmbceetthvhxidqieojylwhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242169.49229-505-150442890393469/AnsiballZ_stat.py'
Sep 30 14:22:49 compute-0 sudo[137581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:49.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:49.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:49 compute-0 python3.9[137583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:49 compute-0 sudo[137581]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:50 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:50 compute-0 sudo[137704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbmlokbygikwwxkrrzprphzrbufjawou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242169.49229-505-150442890393469/AnsiballZ_copy.py'
Sep 30 14:22:50 compute-0 sudo[137704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:50 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260002d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:50 compute-0 python3.9[137706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242169.49229-505-150442890393469/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7ffb7af23393f9b2502e6b8bf0a094f293fbd2df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:50 compute-0 sudo[137704]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:50 compute-0 ceph-mon[74194]: pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:22:50 compute-0 sudo[137856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgedxitnpvcgsaumtiejuqsgqauscwmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242170.6331916-505-127453876348571/AnsiballZ_stat.py'
Sep 30 14:22:50 compute-0 sudo[137856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:22:51 compute-0 python3.9[137858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:51 compute-0 sudo[137856]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:51 compute-0 sudo[137980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pigltwzgbecxkctgounhuiqdozztglha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242170.6331916-505-127453876348571/AnsiballZ_copy.py'
Sep 30 14:22:51 compute-0 sudo[137980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:51 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:51 compute-0 python3.9[137982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242170.6331916-505-127453876348571/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1b2c49abdf8db7d1af40c9be0864753b0c9dd09e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:51 compute-0 sudo[137980]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:22:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:51.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:22:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:22:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:51.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:22:51 compute-0 sudo[138133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oapqinuodlitndwnlnhpukyxhogjbobg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242171.7144096-505-130496837509790/AnsiballZ_stat.py'
Sep 30 14:22:51 compute-0 sudo[138133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:52 compute-0 python3.9[138135]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:52 compute-0 sudo[138133]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:52 compute-0 sudo[138256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xivjzqhemhbsliexkuvyodxnhpzvtwtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242171.7144096-505-130496837509790/AnsiballZ_copy.py'
Sep 30 14:22:52 compute-0 sudo[138256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:52 compute-0 ceph-mon[74194]: pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:22:52 compute-0 python3.9[138258]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242171.7144096-505-130496837509790/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a0b5d7b12dae6d7e392dfd57031746547e101b20 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:52 compute-0 sudo[138256]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:22:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142253 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:22:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:53 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260002d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:53 compute-0 sudo[138409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kklhlephcbhjuxvegdcukzhuitsokjtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242173.4805648-668-44033417794319/AnsiballZ_file.py'
Sep 30 14:22:53 compute-0 sudo[138409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:53.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:53 compute-0 python3.9[138412]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:22:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:22:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:53.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:22:53 compute-0 sudo[138409]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:54 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:54 compute-0 sudo[138562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqfscvrlqevnwbqzskamwlcemgckeyld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242174.084796-706-188334446517272/AnsiballZ_stat.py'
Sep 30 14:22:54 compute-0 sudo[138562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:54 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2264002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:54 compute-0 python3.9[138564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:54 compute-0 sudo[138562]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:54 compute-0 ceph-mon[74194]: pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:22:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:54] "GET /metrics HTTP/1.1" 200 48415 "" "Prometheus/2.51.0"
Sep 30 14:22:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:22:54] "GET /metrics HTTP/1.1" 200 48415 "" "Prometheus/2.51.0"
Sep 30 14:22:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:22:54 compute-0 sudo[138685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geogllnckrzlkklzysclljjmployvwdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242174.084796-706-188334446517272/AnsiballZ_copy.py'
Sep 30 14:22:54 compute-0 sudo[138685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:55 compute-0 python3.9[138687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242174.084796-706-188334446517272/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b0c7031d22a68ee9798f5449b16cb4c47fcab9d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:55 compute-0 sudo[138685]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:22:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:55 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:55 compute-0 sudo[138838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odtuobompbpvrjxzdsyihwiokrlnnceg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242175.2945976-750-155023161881889/AnsiballZ_file.py'
Sep 30 14:22:55 compute-0 sudo[138838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:55 compute-0 python3.9[138840]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:22:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:55.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:55 compute-0 sudo[138838]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:22:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:55.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:22:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:56 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260002d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:56 compute-0 sudo[138991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwhclsxixxveskmfcaazndhtanajqyrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242175.959312-776-241802289772978/AnsiballZ_stat.py'
Sep 30 14:22:56 compute-0 sudo[138991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:56 compute-0 python3.9[138993]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:56 compute-0 sudo[138991]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:56 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:56 compute-0 sudo[139114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcekgasnwbvchhpwvjsovnkgrshoazkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242175.959312-776-241802289772978/AnsiballZ_copy.py'
Sep 30 14:22:56 compute-0 sudo[139114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:56 compute-0 ceph-mon[74194]: pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:22:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:22:56 compute-0 python3.9[139116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242175.959312-776-241802289772978/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b0c7031d22a68ee9798f5449b16cb4c47fcab9d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:56 compute-0 sudo[139114]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:56.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:22:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:56.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:22:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:56.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:22:57 compute-0 sudo[139267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttgakqmlinxnfnytslnfaxtzqsalxnmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242177.1545918-823-162475013972923/AnsiballZ_file.py'
Sep 30 14:22:57 compute-0 sudo[139267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:57 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:57 compute-0 python3.9[139269]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:22:57 compute-0 sudo[139267]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:57.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:57.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:58 compute-0 sudo[139420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzszhotugphhxvjnnmopxelecytmtsyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242177.8297322-846-59574569779426/AnsiballZ_stat.py'
Sep 30 14:22:58 compute-0 sudo[139420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:58 compute-0 python3.9[139422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:22:58 compute-0 sudo[139420]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:58 compute-0 sudo[139543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agbsdblgtmesyrhzrxzcrqjdcaageejh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242177.8297322-846-59574569779426/AnsiballZ_copy.py'
Sep 30 14:22:58 compute-0 sudo[139543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:58 compute-0 ceph-mon[74194]: pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:22:58 compute-0 python3.9[139545]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242177.8297322-846-59574569779426/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b0c7031d22a68ee9798f5449b16cb4c47fcab9d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:22:58 compute-0 sudo[139543]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:22:58.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:22:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:22:58 compute-0 sudo[139570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:22:58 compute-0 sudo[139570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:22:58 compute-0 sudo[139570]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 14:22:59 compute-0 sudo[139720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yriaktucxtulushbzmvymlklupffhddk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242179.0184155-890-186893603307295/AnsiballZ_file.py'
Sep 30 14:22:59 compute-0 sudo[139720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:22:59
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'default.rgw.log', 'vms', '.nfs', 'default.rgw.control', 'default.rgw.meta']
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:22:59 compute-0 python3.9[139722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:22:59 compute-0 sudo[139720]: pam_unix(sudo:session): session closed for user root
Sep 30 14:22:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:22:59 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:22:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:22:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:22:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:22:59.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:22:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:22:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:22:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:22:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:22:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:22:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:22:59 compute-0 sudo[139874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpsludnkerrxdwbxjvcavnrfhkthriwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242179.6712492-912-193575431465724/AnsiballZ_stat.py'
Sep 30 14:22:59 compute-0 sudo[139874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:00 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:00 compute-0 python3.9[139876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:00 compute-0 sudo[139874]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:00 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:00 compute-0 sudo[139997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gufnklanjndwimgnwtfpsgveshuajzog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242179.6712492-912-193575431465724/AnsiballZ_copy.py'
Sep 30 14:23:00 compute-0 sudo[139997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:00 compute-0 python3.9[139999]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242179.6712492-912-193575431465724/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b0c7031d22a68ee9798f5449b16cb4c47fcab9d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:23:00 compute-0 sudo[139997]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:23:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:23:00 compute-0 ceph-mon[74194]: pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:23:01 compute-0 sudo[140149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlqiyflpvkbzgisefllmzammbcfhbuju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242180.9890964-961-270840115335735/AnsiballZ_file.py'
Sep 30 14:23:01 compute-0 sudo[140149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:01 compute-0 python3.9[140151]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:23:01 compute-0 sudo[140149]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:01 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:23:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:01.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:23:01 compute-0 sudo[140303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvtswbxbiwsrwbbannxdimzectdussfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242181.6241982-981-50331160404095/AnsiballZ_stat.py'
Sep 30 14:23:01 compute-0 sudo[140303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:23:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:01.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:23:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:02 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:02 compute-0 ceph-mon[74194]: pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:23:02 compute-0 python3.9[140305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:02 compute-0 sudo[140303]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:02 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:02 compute-0 sudo[140426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yydziegcznhzvqtmumewfkelancrrmgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242181.6241982-981-50331160404095/AnsiballZ_copy.py'
Sep 30 14:23:02 compute-0 sudo[140426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:02 compute-0 python3.9[140428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242181.6241982-981-50331160404095/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b0c7031d22a68ee9798f5449b16cb4c47fcab9d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:02 compute-0 sudo[140426]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:23:03 compute-0 sudo[140578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cujwefgkzkynrgymlhbxrkjxrdpjcpuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242182.9128456-1031-154324835903991/AnsiballZ_file.py'
Sep 30 14:23:03 compute-0 sudo[140578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:03 compute-0 python3.9[140580]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:23:03 compute-0 sudo[140578]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:03 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:03.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:03 compute-0 sudo[140732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayxxsgvdrapfaqijduwxfguulfntbhaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242183.573067-1054-138733631267313/AnsiballZ_stat.py'
Sep 30 14:23:03 compute-0 sudo[140732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:03.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:04 compute-0 python3.9[140734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:04 compute-0 sudo[140732]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:04 compute-0 ceph-mon[74194]: pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:23:04 compute-0 sudo[140855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxpvkpvpassbagvhkexipdjqkuxyhkdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242183.573067-1054-138733631267313/AnsiballZ_copy.py'
Sep 30 14:23:04 compute-0 sudo[140855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:04 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:04 compute-0 python3.9[140857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242183.573067-1054-138733631267313/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b0c7031d22a68ee9798f5449b16cb4c47fcab9d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:04 compute-0 sudo[140855]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:04] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:23:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:04] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:23:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:05 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:23:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:05.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:23:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:05.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:06 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:06 compute-0 sshd-session[134673]: Connection closed by 192.168.122.30 port 46808
Sep 30 14:23:06 compute-0 sshd-session[134670]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:23:06 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Sep 30 14:23:06 compute-0 systemd[1]: session-48.scope: Consumed 22.151s CPU time.
Sep 30 14:23:06 compute-0 systemd-logind[808]: Session 48 logged out. Waiting for processes to exit.
Sep 30 14:23:06 compute-0 systemd-logind[808]: Removed session 48.
Sep 30 14:23:06 compute-0 ceph-mon[74194]: pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:06 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:06.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:23:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142307 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:23:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:07 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:07.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:07.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:08 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2264003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:08 compute-0 ceph-mon[74194]: pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:08 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:08.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:23:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:09 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:09.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:23:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:09.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:23:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:10 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:10 compute-0 ceph-mon[74194]: pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:10 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:11 compute-0 sudo[140890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:23:11 compute-0 sudo[140890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:11 compute-0 sudo[140890]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:11 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:11 compute-0 sudo[140915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 14:23:11 compute-0 sudo[140915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:11.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:11 compute-0 sshd-session[140943]: Accepted publickey for zuul from 192.168.122.30 port 40532 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:23:11 compute-0 systemd-logind[808]: New session 49 of user zuul.
Sep 30 14:23:11 compute-0 systemd[1]: Started Session 49 of User zuul.
Sep 30 14:23:11 compute-0 sshd-session[140943]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:23:11 compute-0 sudo[140915]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:23:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:23:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:23:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:23:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:11.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:11 compute-0 sudo[140972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:23:11 compute-0 sudo[140972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:11 compute-0 sudo[140972]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:12 compute-0 sudo[141034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:23:12 compute-0 sudo[141034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:12 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:12 compute-0 ceph-mon[74194]: pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:12 compute-0 sudo[141181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reepeuidhjemdoiggyxduduygrarkamk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242191.9614408-26-83797914457631/AnsiballZ_file.py'
Sep 30 14:23:12 compute-0 sudo[141181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:12 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:12 compute-0 sudo[141034]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:23:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:23:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:23:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:23:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 437 B/s rd, 0 op/s
Sep 30 14:23:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:23:12 compute-0 python3.9[141183]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:23:12 compute-0 sudo[141181]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:23:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:23:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:23:12 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:23:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:23:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:23:12 compute-0 sudo[141199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:23:12 compute-0 sudo[141199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:12 compute-0 sudo[141199]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:12 compute-0 sudo[141247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:23:12 compute-0 sudo[141247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:13 compute-0 podman[141367]: 2025-09-30 14:23:13.129736501 +0000 UTC m=+0.041137056 container create 2b74cc2d7809134fdc0dd1d74bc934c40ba2ad5e689c9a10f370f2228abf2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:23:13 compute-0 systemd[1]: Started libpod-conmon-2b74cc2d7809134fdc0dd1d74bc934c40ba2ad5e689c9a10f370f2228abf2b38.scope.
Sep 30 14:23:13 compute-0 podman[141367]: 2025-09-30 14:23:13.108013614 +0000 UTC m=+0.019414199 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:23:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:23:13 compute-0 podman[141367]: 2025-09-30 14:23:13.230164866 +0000 UTC m=+0.141565451 container init 2b74cc2d7809134fdc0dd1d74bc934c40ba2ad5e689c9a10f370f2228abf2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:23:13 compute-0 podman[141367]: 2025-09-30 14:23:13.237837592 +0000 UTC m=+0.149238147 container start 2b74cc2d7809134fdc0dd1d74bc934c40ba2ad5e689c9a10f370f2228abf2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 14:23:13 compute-0 podman[141367]: 2025-09-30 14:23:13.242381229 +0000 UTC m=+0.153781824 container attach 2b74cc2d7809134fdc0dd1d74bc934c40ba2ad5e689c9a10f370f2228abf2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:23:13 compute-0 gracious_chatelet[141428]: 167 167
Sep 30 14:23:13 compute-0 systemd[1]: libpod-2b74cc2d7809134fdc0dd1d74bc934c40ba2ad5e689c9a10f370f2228abf2b38.scope: Deactivated successfully.
Sep 30 14:23:13 compute-0 podman[141367]: 2025-09-30 14:23:13.246420012 +0000 UTC m=+0.157820577 container died 2b74cc2d7809134fdc0dd1d74bc934c40ba2ad5e689c9a10f370f2228abf2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:23:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b217a8328a771518530d7a1f1cad248286a09c17a162ce3732a7385fc8a6858-merged.mount: Deactivated successfully.
Sep 30 14:23:13 compute-0 sudo[141461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwzbxeuawckcqxnbulwfqbyomnfcletk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242192.8108037-62-253590714902434/AnsiballZ_stat.py'
Sep 30 14:23:13 compute-0 sudo[141461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:13 compute-0 podman[141367]: 2025-09-30 14:23:13.291800896 +0000 UTC m=+0.203201461 container remove 2b74cc2d7809134fdc0dd1d74bc934c40ba2ad5e689c9a10f370f2228abf2b38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:23:13 compute-0 systemd[1]: libpod-conmon-2b74cc2d7809134fdc0dd1d74bc934c40ba2ad5e689c9a10f370f2228abf2b38.scope: Deactivated successfully.
Sep 30 14:23:13 compute-0 podman[141484]: 2025-09-30 14:23:13.441566685 +0000 UTC m=+0.039349339 container create a1ffe0320e04eeb38c429d7743f8ca67dfb9368113f2f1baa9da859fb10ac440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Sep 30 14:23:13 compute-0 python3.9[141475]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:13 compute-0 systemd[1]: Started libpod-conmon-a1ffe0320e04eeb38c429d7743f8ca67dfb9368113f2f1baa9da859fb10ac440.scope.
Sep 30 14:23:13 compute-0 sudo[141461]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be3510ff87dcf8552eabceeba6a9e845df7a7ff120c72424543f6e74e468a5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be3510ff87dcf8552eabceeba6a9e845df7a7ff120c72424543f6e74e468a5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be3510ff87dcf8552eabceeba6a9e845df7a7ff120c72424543f6e74e468a5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be3510ff87dcf8552eabceeba6a9e845df7a7ff120c72424543f6e74e468a5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be3510ff87dcf8552eabceeba6a9e845df7a7ff120c72424543f6e74e468a5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:13 compute-0 podman[141484]: 2025-09-30 14:23:13.513626703 +0000 UTC m=+0.111409387 container init a1ffe0320e04eeb38c429d7743f8ca67dfb9368113f2f1baa9da859fb10ac440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:23:13 compute-0 podman[141484]: 2025-09-30 14:23:13.424251171 +0000 UTC m=+0.022033855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:23:13 compute-0 podman[141484]: 2025-09-30 14:23:13.524240145 +0000 UTC m=+0.122022809 container start a1ffe0320e04eeb38c429d7743f8ca67dfb9368113f2f1baa9da859fb10ac440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:23:13 compute-0 podman[141484]: 2025-09-30 14:23:13.527669893 +0000 UTC m=+0.125452577 container attach a1ffe0320e04eeb38c429d7743f8ca67dfb9368113f2f1baa9da859fb10ac440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:23:13 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:23:13 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:23:13 compute-0 ceph-mon[74194]: pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 437 B/s rd, 0 op/s
Sep 30 14:23:13 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:13 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:13 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:23:13 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:23:13 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:23:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:13 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268001060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:13.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:13 compute-0 hungry_jones[141501]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:23:13 compute-0 hungry_jones[141501]: --> All data devices are unavailable
Sep 30 14:23:13 compute-0 systemd[1]: libpod-a1ffe0320e04eeb38c429d7743f8ca67dfb9368113f2f1baa9da859fb10ac440.scope: Deactivated successfully.
Sep 30 14:23:13 compute-0 podman[141484]: 2025-09-30 14:23:13.859061459 +0000 UTC m=+0.456844143 container died a1ffe0320e04eeb38c429d7743f8ca67dfb9368113f2f1baa9da859fb10ac440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:23:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7be3510ff87dcf8552eabceeba6a9e845df7a7ff120c72424543f6e74e468a5e-merged.mount: Deactivated successfully.
Sep 30 14:23:13 compute-0 podman[141484]: 2025-09-30 14:23:13.905333925 +0000 UTC m=+0.503116589 container remove a1ffe0320e04eeb38c429d7743f8ca67dfb9368113f2f1baa9da859fb10ac440 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_jones, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:23:13 compute-0 sudo[141648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slmsjqcsgxzttggenqnxqjdvvsnnuwhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242192.8108037-62-253590714902434/AnsiballZ_copy.py'
Sep 30 14:23:13 compute-0 systemd[1]: libpod-conmon-a1ffe0320e04eeb38c429d7743f8ca67dfb9368113f2f1baa9da859fb10ac440.scope: Deactivated successfully.
Sep 30 14:23:13 compute-0 sudo[141648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:13 compute-0 sudo[141247]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:13.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:13 compute-0 sudo[141651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:23:13 compute-0 sudo[141651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:13 compute-0 sudo[141651]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:14 compute-0 sudo[141676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:23:14 compute-0 sudo[141676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:14 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:14 compute-0 python3.9[141650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242192.8108037-62-253590714902434/.source.conf _original_basename=ceph.conf follow=False checksum=e1711c016d554460ec7f0b28b684e5a0f66b683e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:14 compute-0 sudo[141648]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:14 compute-0 podman[141818]: 2025-09-30 14:23:14.401861195 +0000 UTC m=+0.044852651 container create b2900c2bfe82cccf048ab2b29590a2d66d0cdf842c92ad8fa45bd377d6fd04ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:23:14 compute-0 systemd[1]: Started libpod-conmon-b2900c2bfe82cccf048ab2b29590a2d66d0cdf842c92ad8fa45bd377d6fd04ae.scope.
Sep 30 14:23:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:14 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:23:14 compute-0 podman[141818]: 2025-09-30 14:23:14.380656521 +0000 UTC m=+0.023648037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:23:14 compute-0 podman[141818]: 2025-09-30 14:23:14.475296288 +0000 UTC m=+0.118287744 container init b2900c2bfe82cccf048ab2b29590a2d66d0cdf842c92ad8fa45bd377d6fd04ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_clarke, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:23:14 compute-0 podman[141818]: 2025-09-30 14:23:14.481744353 +0000 UTC m=+0.124735809 container start b2900c2bfe82cccf048ab2b29590a2d66d0cdf842c92ad8fa45bd377d6fd04ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_clarke, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:23:14 compute-0 epic_clarke[141857]: 167 167
Sep 30 14:23:14 compute-0 systemd[1]: libpod-b2900c2bfe82cccf048ab2b29590a2d66d0cdf842c92ad8fa45bd377d6fd04ae.scope: Deactivated successfully.
Sep 30 14:23:14 compute-0 podman[141818]: 2025-09-30 14:23:14.488183968 +0000 UTC m=+0.131175424 container attach b2900c2bfe82cccf048ab2b29590a2d66d0cdf842c92ad8fa45bd377d6fd04ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:23:14 compute-0 podman[141818]: 2025-09-30 14:23:14.488480726 +0000 UTC m=+0.131472182 container died b2900c2bfe82cccf048ab2b29590a2d66d0cdf842c92ad8fa45bd377d6fd04ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:23:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1d1bd4ad31efbd59b73529333e3efbab1f8b30b6ce608b6ed4cbee68bb08244-merged.mount: Deactivated successfully.
Sep 30 14:23:14 compute-0 podman[141818]: 2025-09-30 14:23:14.520605569 +0000 UTC m=+0.163597025 container remove b2900c2bfe82cccf048ab2b29590a2d66d0cdf842c92ad8fa45bd377d6fd04ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:23:14 compute-0 systemd[1]: libpod-conmon-b2900c2bfe82cccf048ab2b29590a2d66d0cdf842c92ad8fa45bd377d6fd04ae.scope: Deactivated successfully.
Sep 30 14:23:14 compute-0 sudo[141926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyfullnjrswhpypcavzezqwobbsirjvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242194.280971-62-2785468278862/AnsiballZ_stat.py'
Sep 30 14:23:14 compute-0 sudo[141926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 174 B/s rd, 0 op/s
Sep 30 14:23:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:23:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:23:14 compute-0 podman[141934]: 2025-09-30 14:23:14.675340796 +0000 UTC m=+0.056000857 container create 70d9f423876a3765adf166f33965520a5b226423e0bf4b759606ce3bdf53df42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:23:14 compute-0 systemd[1]: Started libpod-conmon-70d9f423876a3765adf166f33965520a5b226423e0bf4b759606ce3bdf53df42.scope.
Sep 30 14:23:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:23:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:14] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:23:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:14] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:23:14 compute-0 python3.9[141928]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ad2e3a41133b99316c01186f1046f65d6dac63eb6c10439a1ecc22496a3d60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:14 compute-0 podman[141934]: 2025-09-30 14:23:14.656862662 +0000 UTC m=+0.037522734 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:23:14 compute-0 sudo[141926]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ad2e3a41133b99316c01186f1046f65d6dac63eb6c10439a1ecc22496a3d60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ad2e3a41133b99316c01186f1046f65d6dac63eb6c10439a1ecc22496a3d60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ad2e3a41133b99316c01186f1046f65d6dac63eb6c10439a1ecc22496a3d60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:14 compute-0 podman[141934]: 2025-09-30 14:23:14.768442383 +0000 UTC m=+0.149102434 container init 70d9f423876a3765adf166f33965520a5b226423e0bf4b759606ce3bdf53df42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 14:23:14 compute-0 podman[141934]: 2025-09-30 14:23:14.776536121 +0000 UTC m=+0.157196162 container start 70d9f423876a3765adf166f33965520a5b226423e0bf4b759606ce3bdf53df42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:23:14 compute-0 podman[141934]: 2025-09-30 14:23:14.779523767 +0000 UTC m=+0.160183828 container attach 70d9f423876a3765adf166f33965520a5b226423e0bf4b759606ce3bdf53df42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:23:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:23:15 compute-0 sudo[142080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsvgucjgbhiltztuswzftxmapmpseypy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242194.280971-62-2785468278862/AnsiballZ_copy.py'
Sep 30 14:23:15 compute-0 sudo[142080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:15 compute-0 interesting_jennings[141951]: {
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:     "0": [
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:         {
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "devices": [
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "/dev/loop3"
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             ],
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "lv_name": "ceph_lv0",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "lv_size": "21470642176",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "name": "ceph_lv0",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "tags": {
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.cluster_name": "ceph",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.crush_device_class": "",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.encrypted": "0",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.osd_id": "0",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.type": "block",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.vdo": "0",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:                 "ceph.with_tpm": "0"
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             },
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "type": "block",
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:             "vg_name": "ceph_vg0"
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:         }
Sep 30 14:23:15 compute-0 interesting_jennings[141951]:     ]
Sep 30 14:23:15 compute-0 interesting_jennings[141951]: }
Sep 30 14:23:15 compute-0 systemd[1]: libpod-70d9f423876a3765adf166f33965520a5b226423e0bf4b759606ce3bdf53df42.scope: Deactivated successfully.
Sep 30 14:23:15 compute-0 podman[141934]: 2025-09-30 14:23:15.102075617 +0000 UTC m=+0.482735658 container died 70d9f423876a3765adf166f33965520a5b226423e0bf4b759606ce3bdf53df42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:23:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-11ad2e3a41133b99316c01186f1046f65d6dac63eb6c10439a1ecc22496a3d60-merged.mount: Deactivated successfully.
Sep 30 14:23:15 compute-0 podman[141934]: 2025-09-30 14:23:15.143833427 +0000 UTC m=+0.524493468 container remove 70d9f423876a3765adf166f33965520a5b226423e0bf4b759606ce3bdf53df42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:23:15 compute-0 systemd[1]: libpod-conmon-70d9f423876a3765adf166f33965520a5b226423e0bf4b759606ce3bdf53df42.scope: Deactivated successfully.
Sep 30 14:23:15 compute-0 sudo[141676]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:15 compute-0 sudo[142096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:23:15 compute-0 sudo[142096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:15 compute-0 sudo[142096]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:15 compute-0 python3.9[142082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242194.280971-62-2785468278862/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=568fd117e3e38e19bc8df91cc4c576927d41f3c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:15 compute-0 sudo[142080]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:15 compute-0 sudo[142121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:23:15 compute-0 sudo[142121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:15 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:15 compute-0 sshd-session[140964]: Connection closed by 192.168.122.30 port 40532
Sep 30 14:23:15 compute-0 sshd-session[140943]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:23:15 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Sep 30 14:23:15 compute-0 systemd[1]: session-49.scope: Consumed 2.496s CPU time.
Sep 30 14:23:15 compute-0 systemd-logind[808]: Session 49 logged out. Waiting for processes to exit.
Sep 30 14:23:15 compute-0 systemd-logind[808]: Removed session 49.
Sep 30 14:23:15 compute-0 podman[142213]: 2025-09-30 14:23:15.745200275 +0000 UTC m=+0.042088890 container create 7f69dccb560da71a6a9904f2b5ed3b461bea8609e986ee2d18f660a8ec15d98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:23:15 compute-0 systemd[1]: Started libpod-conmon-7f69dccb560da71a6a9904f2b5ed3b461bea8609e986ee2d18f660a8ec15d98a.scope.
Sep 30 14:23:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:15.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:23:15 compute-0 podman[142213]: 2025-09-30 14:23:15.818750099 +0000 UTC m=+0.115638744 container init 7f69dccb560da71a6a9904f2b5ed3b461bea8609e986ee2d18f660a8ec15d98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 14:23:15 compute-0 podman[142213]: 2025-09-30 14:23:15.729599215 +0000 UTC m=+0.026487860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:23:15 compute-0 podman[142213]: 2025-09-30 14:23:15.826065337 +0000 UTC m=+0.122953982 container start 7f69dccb560da71a6a9904f2b5ed3b461bea8609e986ee2d18f660a8ec15d98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 14:23:15 compute-0 podman[142213]: 2025-09-30 14:23:15.82969503 +0000 UTC m=+0.126583675 container attach 7f69dccb560da71a6a9904f2b5ed3b461bea8609e986ee2d18f660a8ec15d98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:23:15 compute-0 vibrant_agnesi[142230]: 167 167
Sep 30 14:23:15 compute-0 systemd[1]: libpod-7f69dccb560da71a6a9904f2b5ed3b461bea8609e986ee2d18f660a8ec15d98a.scope: Deactivated successfully.
Sep 30 14:23:15 compute-0 podman[142213]: 2025-09-30 14:23:15.830903171 +0000 UTC m=+0.127791806 container died 7f69dccb560da71a6a9904f2b5ed3b461bea8609e986ee2d18f660a8ec15d98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:23:15 compute-0 ceph-mon[74194]: pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 174 B/s rd, 0 op/s
Sep 30 14:23:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b1d72d3f0ad041fa89090826df1179c15b3aefeaa05665c735bc5fa5480a247-merged.mount: Deactivated successfully.
Sep 30 14:23:15 compute-0 podman[142213]: 2025-09-30 14:23:15.87026545 +0000 UTC m=+0.167154075 container remove 7f69dccb560da71a6a9904f2b5ed3b461bea8609e986ee2d18f660a8ec15d98a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:23:15 compute-0 systemd[1]: libpod-conmon-7f69dccb560da71a6a9904f2b5ed3b461bea8609e986ee2d18f660a8ec15d98a.scope: Deactivated successfully.
Sep 30 14:23:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:15.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:16 compute-0 podman[142254]: 2025-09-30 14:23:16.033856464 +0000 UTC m=+0.044471121 container create 7ade3f423399f5a4967fee98a390230d85f2e2b4bff079441d269ceeb18447c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_newton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 14:23:16 compute-0 systemd[1]: Started libpod-conmon-7ade3f423399f5a4967fee98a390230d85f2e2b4bff079441d269ceeb18447c2.scope.
Sep 30 14:23:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:16 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268001060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fbea2a4dfc17b38529aab7f0328d769634231859343395585ff186960d4cd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fbea2a4dfc17b38529aab7f0328d769634231859343395585ff186960d4cd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fbea2a4dfc17b38529aab7f0328d769634231859343395585ff186960d4cd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fbea2a4dfc17b38529aab7f0328d769634231859343395585ff186960d4cd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:23:16 compute-0 podman[142254]: 2025-09-30 14:23:16.010709281 +0000 UTC m=+0.021323978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:23:16 compute-0 podman[142254]: 2025-09-30 14:23:16.115273581 +0000 UTC m=+0.125888238 container init 7ade3f423399f5a4967fee98a390230d85f2e2b4bff079441d269ceeb18447c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 14:23:16 compute-0 podman[142254]: 2025-09-30 14:23:16.122558608 +0000 UTC m=+0.133173245 container start 7ade3f423399f5a4967fee98a390230d85f2e2b4bff079441d269ceeb18447c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_newton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 14:23:16 compute-0 podman[142254]: 2025-09-30 14:23:16.125466563 +0000 UTC m=+0.136081200 container attach 7ade3f423399f5a4967fee98a390230d85f2e2b4bff079441d269ceeb18447c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:23:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:16 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 612 B/s rd, 87 B/s wr, 0 op/s
Sep 30 14:23:16 compute-0 lvm[142344]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:23:16 compute-0 lvm[142344]: VG ceph_vg0 finished
Sep 30 14:23:16 compute-0 sweet_newton[142270]: {}
Sep 30 14:23:16 compute-0 ceph-mon[74194]: pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 612 B/s rd, 87 B/s wr, 0 op/s
Sep 30 14:23:16 compute-0 systemd[1]: libpod-7ade3f423399f5a4967fee98a390230d85f2e2b4bff079441d269ceeb18447c2.scope: Deactivated successfully.
Sep 30 14:23:16 compute-0 podman[142254]: 2025-09-30 14:23:16.882635895 +0000 UTC m=+0.893250532 container died 7ade3f423399f5a4967fee98a390230d85f2e2b4bff079441d269ceeb18447c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_newton, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:23:16 compute-0 systemd[1]: libpod-7ade3f423399f5a4967fee98a390230d85f2e2b4bff079441d269ceeb18447c2.scope: Consumed 1.139s CPU time.
Sep 30 14:23:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-55fbea2a4dfc17b38529aab7f0328d769634231859343395585ff186960d4cd3-merged.mount: Deactivated successfully.
Sep 30 14:23:16 compute-0 podman[142254]: 2025-09-30 14:23:16.93630575 +0000 UTC m=+0.946920387 container remove 7ade3f423399f5a4967fee98a390230d85f2e2b4bff079441d269ceeb18447c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_newton, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:23:16 compute-0 systemd[1]: libpod-conmon-7ade3f423399f5a4967fee98a390230d85f2e2b4bff079441d269ceeb18447c2.scope: Deactivated successfully.
Sep 30 14:23:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:16.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:23:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:16.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:23:16 compute-0 sudo[142121]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:16.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:23:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:23:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:23:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:17 compute-0 sudo[142358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:23:17 compute-0 sudo[142358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:17 compute-0 sudo[142358]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:17 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:23:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:17 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:17.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:17.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:23:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:18 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:18 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268001060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 612 B/s rd, 87 B/s wr, 0 op/s
Sep 30 14:23:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:18.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:23:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:18.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:23:19 compute-0 sudo[142385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:23:19 compute-0 sudo[142385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:19 compute-0 sudo[142385]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:19 compute-0 ceph-mon[74194]: pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 612 B/s rd, 87 B/s wr, 0 op/s
Sep 30 14:23:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:19 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:19.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:23:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:19.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:23:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:20 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:20 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:20 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:23:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:20 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:23:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 612 B/s rd, 87 B/s wr, 0 op/s
Sep 30 14:23:21 compute-0 sshd-session[142412]: Accepted publickey for zuul from 192.168.122.30 port 35054 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:23:21 compute-0 systemd-logind[808]: New session 50 of user zuul.
Sep 30 14:23:21 compute-0 systemd[1]: Started Session 50 of User zuul.
Sep 30 14:23:21 compute-0 sshd-session[142412]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:23:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:21 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268001060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:21 compute-0 ceph-mon[74194]: pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 612 B/s rd, 87 B/s wr, 0 op/s
Sep 30 14:23:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:21.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:21.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:22 compute-0 python3.9[142567]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:23:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:22 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:22 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 962 B/s wr, 3 op/s
Sep 30 14:23:22 compute-0 sudo[142721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irgcnysdypcblfcdyzryaluytqkiizwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242202.488957-62-99652897113375/AnsiballZ_file.py'
Sep 30 14:23:22 compute-0 sudo[142721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:23 compute-0 python3.9[142723]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:23:23 compute-0 sudo[142721]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:23 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:23:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:23 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:23 compute-0 sudo[142874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbkoqhaqrguyvngjibmmmkhxjobcsosz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242203.3491733-62-77714410590374/AnsiballZ_file.py'
Sep 30 14:23:23 compute-0 sudo[142874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:23 compute-0 ceph-mon[74194]: pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 962 B/s wr, 3 op/s
Sep 30 14:23:23 compute-0 python3.9[142876]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:23:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:23:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:23.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:23:23 compute-0 sudo[142874]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:23:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:23.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:23:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:24 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268001060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:24 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:23:24 compute-0 python3.9[143027]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:23:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:24] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:23:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:24] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:23:25 compute-0 sudo[143178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzhwpyctdjoifebcnpyybuwluhahafyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242204.896613-131-81635899719330/AnsiballZ_seboolean.py'
Sep 30 14:23:25 compute-0 sudo[143178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:25 compute-0 python3.9[143180]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Sep 30 14:23:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:25 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:25 compute-0 ceph-mon[74194]: pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:23:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:25.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:25.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:26 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:26 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268001060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.737923) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242206737947, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1002, "num_deletes": 251, "total_data_size": 1798150, "memory_usage": 1817128, "flush_reason": "Manual Compaction"}
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Sep 30 14:23:26 compute-0 sudo[143178]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242206748602, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1731024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12337, "largest_seqno": 13338, "table_properties": {"data_size": 1726109, "index_size": 2439, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10344, "raw_average_key_size": 19, "raw_value_size": 1716332, "raw_average_value_size": 3184, "num_data_blocks": 108, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759242125, "oldest_key_time": 1759242125, "file_creation_time": 1759242206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 10737 microseconds, and 3919 cpu microseconds.
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.748653) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1731024 bytes OK
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.748674) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.749908) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.749924) EVENT_LOG_v1 {"time_micros": 1759242206749919, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.749941) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1793525, prev total WAL file size 1793525, number of live WAL files 2.
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.750549) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1690KB)], [29(13MB)]
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242206750582, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15543159, "oldest_snapshot_seqno": -1}
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4301 keys, 13504053 bytes, temperature: kUnknown
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242206859527, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 13504053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13472190, "index_size": 19977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 109941, "raw_average_key_size": 25, "raw_value_size": 13390528, "raw_average_value_size": 3113, "num_data_blocks": 846, "num_entries": 4301, "num_filter_entries": 4301, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759242206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.859787) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13504053 bytes
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.861712) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.6 rd, 123.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.2 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(16.8) write-amplify(7.8) OK, records in: 4819, records dropped: 518 output_compression: NoCompression
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.861738) EVENT_LOG_v1 {"time_micros": 1759242206861726, "job": 12, "event": "compaction_finished", "compaction_time_micros": 109028, "compaction_time_cpu_micros": 26298, "output_level": 6, "num_output_files": 1, "total_output_size": 13504053, "num_input_records": 4819, "num_output_records": 4301, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242206862203, "job": 12, "event": "table_file_deletion", "file_number": 31}
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242206864885, "job": 12, "event": "table_file_deletion", "file_number": 29}
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.750465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.864982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.864986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.864988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.864989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:23:26 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:23:26.864990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:23:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:26.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:23:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:23:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:26.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:23:27 compute-0 sudo[143335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dflxyyffmpwiwxgzsqkiwvdciqvxsdlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242207.0047276-161-100745940301216/AnsiballZ_setup.py'
Sep 30 14:23:27 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Sep 30 14:23:27 compute-0 sudo[143335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:27 compute-0 python3.9[143337]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:23:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:27 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:27 compute-0 ceph-mon[74194]: pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:23:27 compute-0 sudo[143335]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:27.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:27.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:28 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:28 compute-0 sudo[143421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwfuunscdduypxoxhcelvrkdofovalku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242207.0047276-161-100745940301216/AnsiballZ_dnf.py'
Sep 30 14:23:28 compute-0 sudo[143421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:28 compute-0 python3.9[143423]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:23:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:28 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:23:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:28.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:23:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142329 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:23:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:29 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:23:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:23:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:23:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:23:29 compute-0 ceph-mon[74194]: pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:23:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:23:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:23:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:23:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:23:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:23:29 compute-0 sudo[143421]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:29.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:29.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:30 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:30 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:30 compute-0 sudo[143576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fniqmmeflljsilqurgutnapwxujcsdrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242209.956879-197-231887862941410/AnsiballZ_systemd.py'
Sep 30 14:23:30 compute-0 sudo[143576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:23:30 compute-0 python3.9[143578]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 14:23:30 compute-0 ceph-mon[74194]: pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:23:30 compute-0 sudo[143576]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:31 compute-0 sudo[143732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rklvtqgwykomnoysedyvaqeuhsgcnpzc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759242211.1041121-221-232967284930890/AnsiballZ_edpm_nftables_snippet.py'
Sep 30 14:23:31 compute-0 sudo[143732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:31 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:31 compute-0 python3[143734]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Sep 30 14:23:31 compute-0 sudo[143732]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:31.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:31.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:32 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:32 compute-0 sudo[143885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bldoffkvgwlhkhehqetpderglyqbybrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242212.0498338-248-103916343364949/AnsiballZ_file.py'
Sep 30 14:23:32 compute-0 sudo[143885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:32 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:32 compute-0 python3.9[143887]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:32 compute-0 sudo[143885]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:23:33 compute-0 sudo[144037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhdehuqccquoahmybhvpoivqvohaszxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242212.7573311-272-72308115625642/AnsiballZ_stat.py'
Sep 30 14:23:33 compute-0 sudo[144037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:33 compute-0 python3.9[144039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:33 compute-0 sudo[144037]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:33 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:33 compute-0 ceph-mon[74194]: pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:23:33 compute-0 sudo[144116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfilhzwbrkqvzrbilvsnoonnxwgqbvdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242212.7573311-272-72308115625642/AnsiballZ_file.py'
Sep 30 14:23:33 compute-0 sudo[144116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:33.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:33 compute-0 python3.9[144118]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:33 compute-0 sudo[144116]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:33.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:34 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:34 compute-0 sudo[144269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqxtqqmudgtdxohfsncmwqlratiupsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242214.1053145-308-216674099218372/AnsiballZ_stat.py'
Sep 30 14:23:34 compute-0 sudo[144269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:34 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:34 compute-0 python3.9[144271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:23:34 compute-0 sudo[144269]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:34] "GET /metrics HTTP/1.1" 200 48415 "" "Prometheus/2.51.0"
Sep 30 14:23:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:34] "GET /metrics HTTP/1.1" 200 48415 "" "Prometheus/2.51.0"
Sep 30 14:23:34 compute-0 sudo[144347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ietscqurhepuudgjgbgxjtrrpnjzxsfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242214.1053145-308-216674099218372/AnsiballZ_file.py'
Sep 30 14:23:34 compute-0 sudo[144347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:35 compute-0 python3.9[144349]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.m7gh37n2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:35 compute-0 sudo[144347]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:35 compute-0 sudo[144500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obzhoyojakrhtokemlxbajepaoknbciu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242215.2118416-344-39300058850930/AnsiballZ_stat.py'
Sep 30 14:23:35 compute-0 sudo[144500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:35 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:35 compute-0 python3.9[144502]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:35 compute-0 ceph-mon[74194]: pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:23:35 compute-0 sudo[144500]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:35.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:35 compute-0 sudo[144579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkmciubihljzlnanbbbeobrylsrkgtpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242215.2118416-344-39300058850930/AnsiballZ_file.py'
Sep 30 14:23:35 compute-0 sudo[144579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:36.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:36 compute-0 python3.9[144581]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:36 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:36 compute-0 sudo[144579]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:36 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:23:36 compute-0 sudo[144731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilmwdzbqjacgofkcmazojwhvojjxivvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242216.3895056-383-15563704244190/AnsiballZ_command.py'
Sep 30 14:23:36 compute-0 sudo[144731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:36 compute-0 ceph-mon[74194]: pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:23:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:36.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:23:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:36.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:23:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:36.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:23:37 compute-0 python3.9[144733]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:23:37 compute-0 sudo[144731]: pam_unix(sudo:session): session closed for user root
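The nft -j list ruleset run in the task above dumps the live ruleset as JSON before the YAML rule files are read back in, presumably giving the role a machine-readable view of what is already loaded. The same dump can be inspected by hand, for example:
    # pretty-print the current ruleset as JSON (json.tool ships with Python)
    nft -j list ruleset | python3 -m json.tool | head -n 40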
Sep 30 14:23:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:37 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:37 compute-0 sudo[144886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhonnadqlpdxxbwssscqlyvsriqzvlgg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759242217.290558-407-251639960688575/AnsiballZ_edpm_nftables_from_files.py'
Sep 30 14:23:37 compute-0 sudo[144886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:37.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:37 compute-0 python3[144888]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Sep 30 14:23:37 compute-0 sudo[144886]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:23:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:38.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:23:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:38 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:38 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:38 compute-0 sudo[145038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rltvhjxajqbfvxsxusgzzaxdwqyfpoij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242218.2311466-431-25031908257715/AnsiballZ_stat.py'
Sep 30 14:23:38 compute-0 sudo[145038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:23:38 compute-0 python3.9[145040]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:38 compute-0 sudo[145038]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:38.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:23:39 compute-0 sudo[145090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:23:39 compute-0 sudo[145090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:39 compute-0 sudo[145090]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:39 compute-0 sudo[145188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgzqyrwcfnkiwzkkujlpxnqzrqpblqqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242218.2311466-431-25031908257715/AnsiballZ_copy.py'
Sep 30 14:23:39 compute-0 sudo[145188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:39 compute-0 python3.9[145190]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242218.2311466-431-25031908257715/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:39 compute-0 sudo[145188]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:39 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:39 compute-0 ceph-mon[74194]: pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:23:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:39 compute-0 sudo[145342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvsmsgjbbnagitqiqgtrupjawcelujeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242219.6746607-476-40231121021404/AnsiballZ_stat.py'
Sep 30 14:23:39 compute-0 sudo[145342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:23:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:40.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:23:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:40 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:40 compute-0 python3.9[145344]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:40 compute-0 sudo[145342]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:40 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:40 compute-0 sudo[145467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqtcejleyduktssmattldwaaamrhogod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242219.6746607-476-40231121021404/AnsiballZ_copy.py'
Sep 30 14:23:40 compute-0 sudo[145467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:23:40 compute-0 python3.9[145469]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242219.6746607-476-40231121021404/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:40 compute-0 sudo[145467]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:41 compute-0 sudo[145619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tztvckgliexzgsjryszmxchrkrnijciv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242220.9569893-521-38393246802772/AnsiballZ_stat.py'
Sep 30 14:23:41 compute-0 sudo[145619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:41 compute-0 python3.9[145621]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:41 compute-0 sudo[145619]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:41 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:41 compute-0 ceph-mon[74194]: pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:23:41 compute-0 sudo[145746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brkixelurwuycwyztmgppnziplppfvln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242220.9569893-521-38393246802772/AnsiballZ_copy.py'
Sep 30 14:23:41 compute-0 sudo[145746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:41.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:41 compute-0 python3.9[145748]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242220.9569893-521-38393246802772/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:41 compute-0 sudo[145746]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:23:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:42.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:23:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:42 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:42 compute-0 sudo[145898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkcdhlimfkibxcgzivvdxozwbvyhcrql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242222.2119312-566-6293636432888/AnsiballZ_stat.py'
Sep 30 14:23:42 compute-0 sudo[145898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:42 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:23:42 compute-0 python3.9[145900]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:42 compute-0 sudo[145898]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:42 compute-0 sudo[146023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfapdpgpihflmsgkhpoxotonjzhtbljj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242222.2119312-566-6293636432888/AnsiballZ_copy.py'
Sep 30 14:23:42 compute-0 sudo[146023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:43 compute-0 python3.9[146025]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242222.2119312-566-6293636432888/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:43 compute-0 sudo[146023]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:43 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:43 compute-0 ceph-mon[74194]: pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:23:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:23:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:43.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:23:43 compute-0 sudo[146179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nphmxvypyjxrrbojboedzqpfhbgdkytm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242223.5453591-611-213000784123514/AnsiballZ_stat.py'
Sep 30 14:23:43 compute-0 sudo[146179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:44.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:44 compute-0 python3.9[146181]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:44 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:44 compute-0 sudo[146179]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:44 compute-0 sudo[146304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oazhdieqgqshsjjlgewrzbcgmchnyjlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242223.5453591-611-213000784123514/AnsiballZ_copy.py'
Sep 30 14:23:44 compute-0 sudo[146304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:44 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2264002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:44 compute-0 python3.9[146306]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242223.5453591-611-213000784123514/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:23:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:23:44 compute-0 sudo[146304]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:44] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:23:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:44] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:23:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:23:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:45 compute-0 sudo[146457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoztcbvxphiudsmdghznwojtoyqfycsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242225.2603257-656-112402143148179/AnsiballZ_file.py'
Sep 30 14:23:45 compute-0 sudo[146457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:45 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:45 compute-0 python3.9[146459]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:45 compute-0 sudo[146457]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:45.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:45 compute-0 ceph-mon[74194]: pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:46.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:46 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:46 compute-0 sudo[146610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxcyjrhjprmeeinevpflpdhzpeuohlsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242225.918983-680-170298170133566/AnsiballZ_command.py'
Sep 30 14:23:46 compute-0 sudo[146610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:46 compute-0 python3.9[146612]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:23:46 compute-0 sudo[146610]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:46 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:23:46 compute-0 ceph-mon[74194]: pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:23:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:46.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:23:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:46.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:23:47 compute-0 sudo[146765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aefujprdrvnvtobzwcyvrgpnscpnzcbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242226.6551201-704-183630453188489/AnsiballZ_blockinfile.py'
Sep 30 14:23:47 compute-0 sudo[146765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:47 compute-0 python3.9[146767]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:47 compute-0 sudo[146765]: pam_unix(sudo:session): session closed for user root
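The blockinfile task above maintains a managed block in /etc/sysconfig/nftables.conf, validated with nft -c -f %s before it is written, so the EDPM files are included again when the nftables service starts at boot. From the parameters in the log (the block content, marker "# {mark} ANSIBLE MANAGED BLOCK", and the BEGIN/END marker names), the resulting block should look roughly like this:
    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK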
Sep 30 14:23:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:47 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2264002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:47 compute-0 sudo[146919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rthlekymdtefjajaonmeqblbgiuhibvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242227.5481927-731-50947951428420/AnsiballZ_command.py'
Sep 30 14:23:47 compute-0 sudo[146919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:47.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:48.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:48 compute-0 python3.9[146921]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:23:48 compute-0 sudo[146919]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:48 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003ce0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:48 compute-0 sudo[147072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njwzsglcleqxfxubrfqnjmiyfglrnkzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242228.2280364-755-261251082773811/AnsiballZ_stat.py'
Sep 30 14:23:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:48 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:48 compute-0 sudo[147072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:48 compute-0 python3.9[147075]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:23:48 compute-0 sudo[147072]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:48.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:23:49 compute-0 sudo[147227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulwehvhkbyhggitcoumbigijxgyqmlhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242228.9110968-779-119964967959714/AnsiballZ_command.py'
Sep 30 14:23:49 compute-0 sudo[147227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:49 compute-0 python3.9[147229]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:23:49 compute-0 sudo[147227]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:49 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:49 compute-0 ceph-mon[74194]: pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:23:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:23:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:23:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:50.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:23:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:50 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:50 compute-0 sudo[147384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etrpqdrcadefmyluajpsyqdxmfhafewt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242229.7083488-803-225453550204759/AnsiballZ_file.py'
Sep 30 14:23:50 compute-0 sudo[147384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:50 compute-0 python3.9[147386]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:50 compute-0 sudo[147384]: pam_unix(sudo:session): session closed for user root
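Taken together, the firewall tasks from 14:23:46 onward follow a check-then-apply pattern: the concatenated files are first parsed with nft -c, the chain definitions are loaded, the flush/rules/jump-update files are replayed only while the edpm-rules.nft.changed flag file (touched at 14:23:45) exists, and the flag is then removed. A condensed shell view of that sequence, using only commands and paths that appear in this log:
    cd /etc/nftables
    # 1. syntax-check the full assembled ruleset without applying it
    cat edpm-chains.nft edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft edpm-jumps.nft | nft -c -f -
    # 2. make sure the chains exist
    nft -f edpm-chains.nft
    # 3. flush and reload the rules only when a change is pending
    test -e edpm-rules.nft.changed && \
      cat edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft | nft -f -
    # 4. clear the pending-change flag
    rm -f edpm-rules.nft.changed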
Sep 30 14:23:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:50 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:51 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2264002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:51 compute-0 python3.9[147537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:23:51 compute-0 ceph-mon[74194]: pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:52.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:52 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:52 compute-0 sudo[147689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukllehcixdofodsrvxlrlixftbhmgjqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242232.3075676-923-83128716279080/AnsiballZ_command.py'
Sep 30 14:23:52 compute-0 sudo[147689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:23:52 compute-0 python3.9[147691]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:74:f6:ca:ec" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:23:52 compute-0 ovs-vsctl[147692]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:74:f6:ca:ec external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Sep 30 14:23:52 compute-0 sudo[147689]: pam_unix(sudo:session): session closed for user root
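The ovs-vsctl call above records the OVN chassis settings (Geneve encap IP and type, bridge mappings, chassis MAC mapping, the southbound DB endpoint, probe interval, and so on) as external_ids on the local Open_vSwitch row. As a hypothetical follow-up, not part of the logged run, the values can be read back with:
    # show all external_ids set on the (single) Open_vSwitch record
    ovs-vsctl get Open_vSwitch . external_ids
    # or fetch a single key, e.g. the southbound DB connection string
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote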
Sep 30 14:23:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:23:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8577 writes, 33K keys, 8577 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8577 writes, 1988 syncs, 4.31 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8577 writes, 33K keys, 8577 commit groups, 1.0 writes per commit group, ingest: 21.27 MB, 0.04 MB/s
                                           Interval WAL: 8577 writes, 1988 syncs, 4.31 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Sep 30 14:23:53 compute-0 sudo[147843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecodivvzyoqssfnzlxznliujqozhoebx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242233.1111887-950-178990814062278/AnsiballZ_command.py'
Sep 30 14:23:53 compute-0 sudo[147843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:53 compute-0 python3.9[147845]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:23:53 compute-0 sudo[147843]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:53 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:53 compute-0 ceph-mon[74194]: pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:23:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:54.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:54 compute-0 sudo[147999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvplbtxavodrcclvtrkjidsfkvpvvcpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242233.8389676-974-243333054421974/AnsiballZ_command.py'
Sep 30 14:23:54 compute-0 sudo[147999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:54 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2264002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:54 compute-0 python3.9[148001]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:23:54 compute-0 ovs-vsctl[148002]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Sep 30 14:23:54 compute-0 sudo[147999]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:54 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:54] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:23:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:23:54] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:23:54 compute-0 ceph-mon[74194]: pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:55 compute-0 python3.9[148152]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:23:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:23:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:55 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:55 compute-0 sudo[148305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoreuncjumzwuqbxohhzdpttxtticsez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242235.4747992-1025-188002700589570/AnsiballZ_file.py'
Sep 30 14:23:55 compute-0 sudo[148305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:55.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:55 compute-0 python3.9[148308]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:23:55 compute-0 sudo[148305]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:56.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:56 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:56 compute-0 sudo[148458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhgnfurdlbtzqjgizkighcsbyizyqwxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242236.2051024-1049-95135188556190/AnsiballZ_stat.py'
Sep 30 14:23:56 compute-0 sudo[148458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:56 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2264003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:23:56 compute-0 python3.9[148460]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:56 compute-0 sudo[148458]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:56 compute-0 sudo[148536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctechrsaolvwupfrgpcpjszqbyifklrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242236.2051024-1049-95135188556190/AnsiballZ_file.py'
Sep 30 14:23:56 compute-0 sudo[148536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:56.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:23:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:56.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:23:57 compute-0 python3.9[148538]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:23:57 compute-0 sudo[148536]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:57 compute-0 sudo[148689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxwupaygplhoqpdejjmyozmragbbenbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242237.244139-1049-275241498850487/AnsiballZ_stat.py'
Sep 30 14:23:57 compute-0 sudo[148689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:57 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:57 compute-0 python3.9[148691]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:23:57 compute-0 ceph-mon[74194]: pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:23:57 compute-0 sudo[148689]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:57.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:23:57 compute-0 sudo[148768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkhlegyeikuqheefdpapxondolhagoxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242237.244139-1049-275241498850487/AnsiballZ_file.py'
Sep 30 14:23:57 compute-0 sudo[148768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:23:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:23:58.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:23:58 compute-0 python3.9[148770]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:23:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:58 compute-0 sudo[148768]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:58 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270002940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:58.864Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:23:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:58.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:23:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:23:58.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:23:58 compute-0 sudo[148920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueffeougrthzgxnyfxorttmqkxvkyznd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242238.6621358-1118-128126329233518/AnsiballZ_file.py'
Sep 30 14:23:58 compute-0 sudo[148920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:59 compute-0 python3.9[148922]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:23:59 compute-0 sudo[148920]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:59 compute-0 sudo[148930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:23:59 compute-0 sudo[148930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:23:59 compute-0 sudo[148930]: pam_unix(sudo:session): session closed for user root
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:23:59
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.nfs', 'images', 'backups', 'default.rgw.log', 'vms', '.mgr', 'default.rgw.control', '.rgw.root']
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:23:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:23:59 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2264003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:23:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:23:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:23:59 compute-0 ceph-mon[74194]: pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:23:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:23:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:23:59 compute-0 sudo[149099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjyldzlujaywpxuepnkbuvihdmrzwgrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242239.6092196-1142-187953835720795/AnsiballZ_stat.py'
Sep 30 14:23:59 compute-0 sudo[149099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:23:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:23:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:23:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:23:59.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:00 compute-0 python3.9[149101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:00.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:00 compute-0 sudo[149099]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:24:00 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:00 compute-0 sudo[149177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbqqkwcnfrvenfuibhnvjhfencneflwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242239.6092196-1142-187953835720795/AnsiballZ_file.py'
Sep 30 14:24:00 compute-0 sudo[149177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:00 compute-0 python3.9[149179]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:24:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:24:00 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:00 compute-0 sudo[149177]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:24:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:24:01 compute-0 sudo[149329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyioykdvlovxghnhihcqkvlrrciqejws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242240.85095-1178-158718302430222/AnsiballZ_stat.py'
Sep 30 14:24:01 compute-0 sudo[149329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:01 compute-0 python3.9[149331]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:01 compute-0 sudo[149329]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:01 compute-0 sudo[149408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efzuwpydvifjmgcvuaeacraskizblbjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242240.85095-1178-158718302430222/AnsiballZ_file.py'
Sep 30 14:24:01 compute-0 sudo[149408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:24:01 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2270002940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:01 compute-0 python3.9[149410]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:24:01 compute-0 ceph-mon[74194]: pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:01 compute-0 sudo[149408]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:01.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:02.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:24:02 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2264003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:02 compute-0 sudo[149561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erbbvygmvurbunitgxlekaqdxtdimgyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242242.1704133-1214-240563345155302/AnsiballZ_systemd.py'
Sep 30 14:24:02 compute-0 sudo[149561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:24:02 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2260003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:24:02 compute-0 python3.9[149563]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:24:02 compute-0 systemd[1]: Reloading.
Sep 30 14:24:02 compute-0 systemd-rc-local-generator[149593]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:24:02 compute-0 systemd-sysv-generator[149596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:24:02 compute-0 ceph-mon[74194]: pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:24:03 compute-0 sudo[149561]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[130489]: 30/09/2025 14:24:03 : epoch 68dbe780 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2268003e80 fd 48 proxy ignored for local
Sep 30 14:24:03 compute-0 kernel: ganesha.nfsd[140938]: segfault at 50 ip 00007f233ff8232e sp 00007f22fcff8210 error 4 in libntirpc.so.5.8[7f233ff67000+2c000] likely on CPU 5 (core 0, socket 5)
Sep 30 14:24:03 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 14:24:03 compute-0 systemd[1]: Started Process Core Dump (PID 149753/UID 0).
Sep 30 14:24:03 compute-0 sudo[149754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjmrbryhjmhvawzslwvjmytldmlvinlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242243.3824286-1238-261206351709493/AnsiballZ_stat.py'
Sep 30 14:24:03 compute-0 sudo[149754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:03 compute-0 python3.9[149757]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:03 compute-0 sudo[149754]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:04.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:04 compute-0 sudo[149834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqfvweryrhkmdriyifvqqibxzyrajtct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242243.3824286-1238-261206351709493/AnsiballZ_file.py'
Sep 30 14:24:04 compute-0 sudo[149834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:04 compute-0 python3.9[149836]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:24:04 compute-0 sudo[149834]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:04 compute-0 systemd-coredump[149756]: Process 130493 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007f233ff8232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 14:24:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:04] "GET /metrics HTTP/1.1" 200 48407 "" "Prometheus/2.51.0"
Sep 30 14:24:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:04] "GET /metrics HTTP/1.1" 200 48407 "" "Prometheus/2.51.0"
Sep 30 14:24:04 compute-0 systemd[1]: systemd-coredump@3-149753-0.service: Deactivated successfully.
Sep 30 14:24:04 compute-0 systemd[1]: systemd-coredump@3-149753-0.service: Consumed 1.009s CPU time.
Sep 30 14:24:04 compute-0 podman[149940]: 2025-09-30 14:24:04.785104249 +0000 UTC m=+0.026892321 container died 09de7eac6ed58a85e37b5b069644aa52f054189a78284dba9b5a23b9104c763e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0229a34b6b8fcb7e34c06790d6792ac010ce8f1f37a0c40d76de82a07184e648-merged.mount: Deactivated successfully.
Sep 30 14:24:04 compute-0 podman[149940]: 2025-09-30 14:24:04.83937655 +0000 UTC m=+0.081164602 container remove 09de7eac6ed58a85e37b5b069644aa52f054189a78284dba9b5a23b9104c763e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:24:04 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Main process exited, code=exited, status=139/n/a
Sep 30 14:24:04 compute-0 sudo[150008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzraipuwckdwaaobplxoczhqfnmtvgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242244.6345022-1274-260679471841255/AnsiballZ_stat.py'
Sep 30 14:24:04 compute-0 sudo[150008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:05 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Failed with result 'exit-code'.
Sep 30 14:24:05 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.360s CPU time.
Sep 30 14:24:05 compute-0 python3.9[150018]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:05 compute-0 sudo[150008]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:05 compute-0 sudo[150113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thnxaavsxlddybhzmkezryjvafpmnpao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242244.6345022-1274-260679471841255/AnsiballZ_file.py'
Sep 30 14:24:05 compute-0 sudo[150113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:05 compute-0 python3.9[150115]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:24:05 compute-0 sudo[150113]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:05 compute-0 ceph-mon[74194]: pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:05.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:05 compute-0 sudo[150266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oszluagmjjjzvahcevwmwmkantbufwpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242245.745391-1310-251068723142641/AnsiballZ_systemd.py'
Sep 30 14:24:05 compute-0 sudo[150266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:06.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:06 compute-0 python3.9[150268]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:24:06 compute-0 systemd[1]: Reloading.
Sep 30 14:24:06 compute-0 systemd-rc-local-generator[150293]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:24:06 compute-0 systemd-sysv-generator[150297]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:24:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:24:06 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 14:24:06 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 14:24:06 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 14:24:06 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 14:24:06 compute-0 sudo[150266]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:06.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:07 compute-0 sudo[150459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edujznzdcnwxrihlwieuftqaseahfppm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242247.0263224-1340-73910040019655/AnsiballZ_file.py'
Sep 30 14:24:07 compute-0 sudo[150459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:07 compute-0 python3.9[150461]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:07 compute-0 sudo[150459]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:07 compute-0 ceph-mon[74194]: pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:24:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:07.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:07 compute-0 sudo[150613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbsrpvcujbdngjzawjizzpnzszzklsep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242247.7180977-1364-279986172932048/AnsiballZ_stat.py'
Sep 30 14:24:07 compute-0 sudo[150613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:08.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:08 compute-0 python3.9[150615]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:08 compute-0 sudo[150613]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:08 compute-0 sudo[150736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdvhslcjrnvjbhmmwxgxaykwvzbrxlfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242247.7180977-1364-279986172932048/AnsiballZ_copy.py'
Sep 30 14:24:08 compute-0 sudo[150736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:08 compute-0 python3.9[150738]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242247.7180977-1364-279986172932048/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:08 compute-0 sudo[150736]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:08.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:09 compute-0 ceph-mon[74194]: pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:09 compute-0 sudo[150890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aexgdcmydyrpfpikolgrbznpwptofffb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242249.5045395-1415-146749523650296/AnsiballZ_file.py'
Sep 30 14:24:09 compute-0 sudo[150890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:24:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:09.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:24:09 compute-0 python3.9[150892]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:10 compute-0 sudo[150890]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:10.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142410 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:24:10 compute-0 sudo[151042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbffkyowflexzakfhwvpwdrqiyaywmnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242250.2363489-1439-272886127471429/AnsiballZ_stat.py'
Sep 30 14:24:10 compute-0 sudo[151042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:10 compute-0 python3.9[151044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:10 compute-0 sudo[151042]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:11 compute-0 sudo[151165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mizirpcxmtwdrujtauqffjrflrxpngdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242250.2363489-1439-272886127471429/AnsiballZ_copy.py'
Sep 30 14:24:11 compute-0 sudo[151165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:11 compute-0 python3.9[151167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242250.2363489-1439-272886127471429/.source.json _original_basename=.us6gvxhh follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:24:11 compute-0 sudo[151165]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:11 compute-0 sudo[151318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jazxpusltwgzdheasandtubeyhxfdgep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242251.4255626-1484-143189962085722/AnsiballZ_file.py'
Sep 30 14:24:11 compute-0 sudo[151318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:11 compute-0 ceph-mon[74194]: pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:11 compute-0 python3.9[151320]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:24:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:11.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:11 compute-0 sudo[151318]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:12.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:12 compute-0 sudo[151471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dozwyqbxmwzpcufniiatnbbyvggpkeeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242252.1236818-1508-64473650812691/AnsiballZ_stat.py'
Sep 30 14:24:12 compute-0 sudo[151471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:12 compute-0 sudo[151471]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:24:12 compute-0 sudo[151594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqztqbkkldapisglzvuohlapcguibxer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242252.1236818-1508-64473650812691/AnsiballZ_copy.py'
Sep 30 14:24:12 compute-0 sudo[151594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:13 compute-0 sudo[151594]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:13 compute-0 ceph-mon[74194]: pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:24:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:13.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:13 compute-0 sudo[151748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odzdrhyeteuqyduooiokpzsurrwixpqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242253.5484223-1559-62225169675356/AnsiballZ_container_config_data.py'
Sep 30 14:24:13 compute-0 sudo[151748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:24:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:14.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:24:14 compute-0 python3.9[151750]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Sep 30 14:24:14 compute-0 sudo[151748]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:24:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:24:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:14] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:24:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:14] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:24:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:24:14 compute-0 sudo[151900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrqufvnieagnupyhjfrdnnjkvylaigrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242254.4240093-1586-281099713298663/AnsiballZ_container_config_hash.py'
Sep 30 14:24:14 compute-0 sudo[151900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:15 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Scheduled restart job, restart counter is at 4.
Sep 30 14:24:15 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:24:15 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.360s CPU time.
Sep 30 14:24:15 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:24:15 compute-0 python3.9[151902]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 14:24:15 compute-0 sudo[151900]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:15 compute-0 podman[151976]: 2025-09-30 14:24:15.288900466 +0000 UTC m=+0.040966281 container create 3ee602e5338e3a60a8e04eee81924ddf620b5ea058acc017f2e9979ba848b7a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc669bbc091ee3e4c0b33c8e81cda760bec0a9bc8609393f9ba34df276cf237/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc669bbc091ee3e4c0b33c8e81cda760bec0a9bc8609393f9ba34df276cf237/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc669bbc091ee3e4c0b33c8e81cda760bec0a9bc8609393f9ba34df276cf237/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc669bbc091ee3e4c0b33c8e81cda760bec0a9bc8609393f9ba34df276cf237/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:15 compute-0 podman[151976]: 2025-09-30 14:24:15.351991493 +0000 UTC m=+0.104057318 container init 3ee602e5338e3a60a8e04eee81924ddf620b5ea058acc017f2e9979ba848b7a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:24:15 compute-0 podman[151976]: 2025-09-30 14:24:15.357781382 +0000 UTC m=+0.109847197 container start 3ee602e5338e3a60a8e04eee81924ddf620b5ea058acc017f2e9979ba848b7a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:24:15 compute-0 bash[151976]: 3ee602e5338e3a60a8e04eee81924ddf620b5ea058acc017f2e9979ba848b7a3
Sep 30 14:24:15 compute-0 podman[151976]: 2025-09-30 14:24:15.270135515 +0000 UTC m=+0.022201360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:24:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:15 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:24:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:15 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:24:15 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:24:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:15 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:24:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:15 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:24:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:15 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:24:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:15 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:24:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:15 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:24:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:15 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:24:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:15 compute-0 sudo[152160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqmjxlfsulryhtjgxxrnvvuqejuihfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242255.3166978-1613-209225369952923/AnsiballZ_podman_container_info.py'
Sep 30 14:24:15 compute-0 sudo[152160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:15 compute-0 ceph-mon[74194]: pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:15.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:15 compute-0 python3.9[152162]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Sep 30 14:24:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:16.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:16 compute-0 sudo[152160]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:24:16 compute-0 ceph-mon[74194]: pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:24:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:16.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:17 compute-0 sudo[152290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:24:17 compute-0 sudo[152290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:17 compute-0 sudo[152290]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:17 compute-0 sudo[152333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:24:17 compute-0 sudo[152333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:17 compute-0 sudo[152390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvtheawxwbgwmrtkrpuewutodtshjzbz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759242256.9326494-1652-267161830139827/AnsiballZ_edpm_container_manage.py'
Sep 30 14:24:17 compute-0 sudo[152390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:17 compute-0 python3[152392]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 14:24:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:24:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:24:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:17 compute-0 sudo[152333]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:18.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:24:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:24:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:24:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:24:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 608 B/s rd, 86 B/s wr, 0 op/s
Sep 30 14:24:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:24:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:24:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:24:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:24:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:24:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:24:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:24:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:24:18 compute-0 sudo[152463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:24:18 compute-0 sudo[152463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:18 compute-0 sudo[152463]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:18 compute-0 sudo[152494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:24:18 compute-0 sudo[152494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:24:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:24:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:24:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:24:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:24:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:18.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:19 compute-0 podman[152560]: 2025-09-30 14:24:19.156924441 +0000 UTC m=+0.085470502 container create 96ee88abfe5faa4ab519bd97965821b6c9210eeba7511db052f8d3a73b5fd4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:24:19 compute-0 systemd[1]: Started libpod-conmon-96ee88abfe5faa4ab519bd97965821b6c9210eeba7511db052f8d3a73b5fd4f9.scope.
Sep 30 14:24:19 compute-0 podman[152560]: 2025-09-30 14:24:19.109630369 +0000 UTC m=+0.038176450 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:24:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:24:19 compute-0 podman[152560]: 2025-09-30 14:24:19.242080984 +0000 UTC m=+0.170627075 container init 96ee88abfe5faa4ab519bd97965821b6c9210eeba7511db052f8d3a73b5fd4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:24:19 compute-0 podman[152560]: 2025-09-30 14:24:19.250593363 +0000 UTC m=+0.179139424 container start 96ee88abfe5faa4ab519bd97965821b6c9210eeba7511db052f8d3a73b5fd4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:24:19 compute-0 xenodochial_bouman[152576]: 167 167
Sep 30 14:24:19 compute-0 systemd[1]: libpod-96ee88abfe5faa4ab519bd97965821b6c9210eeba7511db052f8d3a73b5fd4f9.scope: Deactivated successfully.
Sep 30 14:24:19 compute-0 conmon[152576]: conmon 96ee88abfe5faa4ab519 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96ee88abfe5faa4ab519bd97965821b6c9210eeba7511db052f8d3a73b5fd4f9.scope/container/memory.events
Sep 30 14:24:19 compute-0 podman[152560]: 2025-09-30 14:24:19.258076115 +0000 UTC m=+0.186622206 container attach 96ee88abfe5faa4ab519bd97965821b6c9210eeba7511db052f8d3a73b5fd4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:24:19 compute-0 podman[152560]: 2025-09-30 14:24:19.259978873 +0000 UTC m=+0.188524944 container died 96ee88abfe5faa4ab519bd97965821b6c9210eeba7511db052f8d3a73b5fd4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:24:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aafb997fb604d3918ea08b62b7fc42c9532ca23d9edb8258558ae231d7568f2-merged.mount: Deactivated successfully.
Sep 30 14:24:19 compute-0 sudo[152579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:24:19 compute-0 sudo[152579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:19 compute-0 sudo[152579]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:19 compute-0 podman[152560]: 2025-09-30 14:24:19.360256164 +0000 UTC m=+0.288802225 container remove 96ee88abfe5faa4ab519bd97965821b6c9210eeba7511db052f8d3a73b5fd4f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 14:24:19 compute-0 systemd[1]: libpod-conmon-96ee88abfe5faa4ab519bd97965821b6c9210eeba7511db052f8d3a73b5fd4f9.scope: Deactivated successfully.
Sep 30 14:24:19 compute-0 podman[152625]: 2025-09-30 14:24:19.54769455 +0000 UTC m=+0.077206551 container create a6c557fa3c56d3de1bc1d2e2fa0d1b6f24712c0e4112233159172c4e876fd225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Sep 30 14:24:19 compute-0 systemd[1]: Started libpod-conmon-a6c557fa3c56d3de1bc1d2e2fa0d1b6f24712c0e4112233159172c4e876fd225.scope.
Sep 30 14:24:19 compute-0 podman[152625]: 2025-09-30 14:24:19.5016791 +0000 UTC m=+0.031191301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:24:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae3514b85d0807225ffaf40fbdbcbadb2f9d04bd49e5a70e4067cd120ee81b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae3514b85d0807225ffaf40fbdbcbadb2f9d04bd49e5a70e4067cd120ee81b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae3514b85d0807225ffaf40fbdbcbadb2f9d04bd49e5a70e4067cd120ee81b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae3514b85d0807225ffaf40fbdbcbadb2f9d04bd49e5a70e4067cd120ee81b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae3514b85d0807225ffaf40fbdbcbadb2f9d04bd49e5a70e4067cd120ee81b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:19 compute-0 podman[152625]: 2025-09-30 14:24:19.634893625 +0000 UTC m=+0.164405626 container init a6c557fa3c56d3de1bc1d2e2fa0d1b6f24712c0e4112233159172c4e876fd225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:24:19 compute-0 podman[152625]: 2025-09-30 14:24:19.641922475 +0000 UTC m=+0.171434476 container start a6c557fa3c56d3de1bc1d2e2fa0d1b6f24712c0e4112233159172c4e876fd225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:24:19 compute-0 podman[152625]: 2025-09-30 14:24:19.649788297 +0000 UTC m=+0.179300298 container attach a6c557fa3c56d3de1bc1d2e2fa0d1b6f24712c0e4112233159172c4e876fd225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 14:24:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:19.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:19 compute-0 ceph-mon[74194]: pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 608 B/s rd, 86 B/s wr, 0 op/s
Sep 30 14:24:19 compute-0 heuristic_mirzakhani[152655]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:24:19 compute-0 heuristic_mirzakhani[152655]: --> All data devices are unavailable
Sep 30 14:24:19 compute-0 systemd[1]: libpod-a6c557fa3c56d3de1bc1d2e2fa0d1b6f24712c0e4112233159172c4e876fd225.scope: Deactivated successfully.
Sep 30 14:24:19 compute-0 podman[152625]: 2025-09-30 14:24:19.98067957 +0000 UTC m=+0.510191561 container died a6c557fa3c56d3de1bc1d2e2fa0d1b6f24712c0e4112233159172c4e876fd225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:24:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:20.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 608 B/s rd, 86 B/s wr, 0 op/s
Sep 30 14:24:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:21 compute-0 ceph-mon[74194]: pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 608 B/s rd, 86 B/s wr, 0 op/s
Sep 30 14:24:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:21 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:24:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:21 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:24:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:24:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:21.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:24:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:22.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 608 B/s wr, 1 op/s
Sep 30 14:24:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ae3514b85d0807225ffaf40fbdbcbadb2f9d04bd49e5a70e4067cd120ee81b7-merged.mount: Deactivated successfully.
Sep 30 14:24:23 compute-0 podman[152625]: 2025-09-30 14:24:23.218138415 +0000 UTC m=+3.747650416 container remove a6c557fa3c56d3de1bc1d2e2fa0d1b6f24712c0e4112233159172c4e876fd225 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:24:23 compute-0 systemd[1]: libpod-conmon-a6c557fa3c56d3de1bc1d2e2fa0d1b6f24712c0e4112233159172c4e876fd225.scope: Deactivated successfully.
Sep 30 14:24:23 compute-0 sudo[152494]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:23 compute-0 sudo[152734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:24:23 compute-0 sudo[152734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:23 compute-0 sudo[152734]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:23 compute-0 sudo[152760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:24:23 compute-0 sudo[152760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:23 compute-0 ceph-mon[74194]: pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 608 B/s wr, 1 op/s
Sep 30 14:24:23 compute-0 podman[152423]: 2025-09-30 14:24:23.654155284 +0000 UTC m=+5.880545999 image pull 7ffac6b06b247caf26cf673b775a5f070f2fa1a6008cf0b0964af7e905ba86a5 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Sep 30 14:24:23 compute-0 podman[152847]: 2025-09-30 14:24:23.803538198 +0000 UTC m=+0.052731978 container create 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Sep 30 14:24:23 compute-0 podman[152847]: 2025-09-30 14:24:23.775285638 +0000 UTC m=+0.024479438 image pull 7ffac6b06b247caf26cf673b775a5f070f2fa1a6008cf0b0964af7e905ba86a5 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Sep 30 14:24:23 compute-0 python3[152392]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Sep 30 14:24:23 compute-0 podman[152853]: 2025-09-30 14:24:23.820019141 +0000 UTC m=+0.053623013 container create dc33eeec56d0f0d3f38366c325ba631bff5ea0138b4f9e040b95589fa6bb7042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_benz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:24:23 compute-0 systemd[1]: Started libpod-conmon-dc33eeec56d0f0d3f38366c325ba631bff5ea0138b4f9e040b95589fa6bb7042.scope.
Sep 30 14:24:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:23.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:24:23 compute-0 podman[152853]: 2025-09-30 14:24:23.891129442 +0000 UTC m=+0.124733354 container init dc33eeec56d0f0d3f38366c325ba631bff5ea0138b4f9e040b95589fa6bb7042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 14:24:23 compute-0 podman[152853]: 2025-09-30 14:24:23.799112729 +0000 UTC m=+0.032716641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:24:23 compute-0 podman[152853]: 2025-09-30 14:24:23.897565545 +0000 UTC m=+0.131169427 container start dc33eeec56d0f0d3f38366c325ba631bff5ea0138b4f9e040b95589fa6bb7042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:24:23 compute-0 gallant_benz[152891]: 167 167
Sep 30 14:24:23 compute-0 systemd[1]: libpod-dc33eeec56d0f0d3f38366c325ba631bff5ea0138b4f9e040b95589fa6bb7042.scope: Deactivated successfully.
Sep 30 14:24:23 compute-0 podman[152853]: 2025-09-30 14:24:23.904569313 +0000 UTC m=+0.138173225 container attach dc33eeec56d0f0d3f38366c325ba631bff5ea0138b4f9e040b95589fa6bb7042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 14:24:23 compute-0 podman[152853]: 2025-09-30 14:24:23.905097527 +0000 UTC m=+0.138701409 container died dc33eeec56d0f0d3f38366c325ba631bff5ea0138b4f9e040b95589fa6bb7042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 14:24:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-618d0d06a44406e4b3cab8d3cdb874ab7436d2169f526dd6d0d1e52e3ec7c3e5-merged.mount: Deactivated successfully.
Sep 30 14:24:23 compute-0 sudo[152390]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:23 compute-0 podman[152853]: 2025-09-30 14:24:23.953811336 +0000 UTC m=+0.187415218 container remove dc33eeec56d0f0d3f38366c325ba631bff5ea0138b4f9e040b95589fa6bb7042 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_benz, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:24:23 compute-0 systemd[1]: libpod-conmon-dc33eeec56d0f0d3f38366c325ba631bff5ea0138b4f9e040b95589fa6bb7042.scope: Deactivated successfully.
Sep 30 14:24:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:24.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:24 compute-0 podman[152965]: 2025-09-30 14:24:24.106630474 +0000 UTC m=+0.049251145 container create a12e730b867b62d2c6278f387b95f87bb64f8bee132db01d752ef7d5b95c0d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:24:24 compute-0 systemd[1]: Started libpod-conmon-a12e730b867b62d2c6278f387b95f87bb64f8bee132db01d752ef7d5b95c0d7d.scope.
Sep 30 14:24:24 compute-0 podman[152965]: 2025-09-30 14:24:24.079935737 +0000 UTC m=+0.022556458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:24:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e68759792e199349c5de8aa36a96c29f46067819fe9fd4bdae4113dd792f35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e68759792e199349c5de8aa36a96c29f46067819fe9fd4bdae4113dd792f35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e68759792e199349c5de8aa36a96c29f46067819fe9fd4bdae4113dd792f35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e68759792e199349c5de8aa36a96c29f46067819fe9fd4bdae4113dd792f35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:24 compute-0 podman[152965]: 2025-09-30 14:24:24.201610817 +0000 UTC m=+0.144231518 container init a12e730b867b62d2c6278f387b95f87bb64f8bee132db01d752ef7d5b95c0d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:24:24 compute-0 podman[152965]: 2025-09-30 14:24:24.209599752 +0000 UTC m=+0.152220423 container start a12e730b867b62d2c6278f387b95f87bb64f8bee132db01d752ef7d5b95c0d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:24:24 compute-0 podman[152965]: 2025-09-30 14:24:24.214108463 +0000 UTC m=+0.156729164 container attach a12e730b867b62d2c6278f387b95f87bb64f8bee132db01d752ef7d5b95c0d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:24:24 compute-0 sudo[153099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jprsybjkzvdayadppnshiastziiydzqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242264.0681415-1676-66415251617520/AnsiballZ_stat.py'
Sep 30 14:24:24 compute-0 sudo[153099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 608 B/s wr, 1 op/s
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]: {
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:     "0": [
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:         {
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "devices": [
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "/dev/loop3"
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             ],
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "lv_name": "ceph_lv0",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "lv_size": "21470642176",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "name": "ceph_lv0",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "tags": {
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.cluster_name": "ceph",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.crush_device_class": "",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.encrypted": "0",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.osd_id": "0",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.type": "block",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.vdo": "0",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:                 "ceph.with_tpm": "0"
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             },
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "type": "block",
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:             "vg_name": "ceph_vg0"
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:         }
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]:     ]
Sep 30 14:24:24 compute-0 dreamy_chaplygin[153025]: }
Sep 30 14:24:24 compute-0 systemd[1]: libpod-a12e730b867b62d2c6278f387b95f87bb64f8bee132db01d752ef7d5b95c0d7d.scope: Deactivated successfully.
Sep 30 14:24:24 compute-0 podman[152965]: 2025-09-30 14:24:24.541120432 +0000 UTC m=+0.483741123 container died a12e730b867b62d2c6278f387b95f87bb64f8bee132db01d752ef7d5b95c0d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:24:24 compute-0 python3.9[153101]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1e68759792e199349c5de8aa36a96c29f46067819fe9fd4bdae4113dd792f35-merged.mount: Deactivated successfully.
Sep 30 14:24:24 compute-0 sudo[153099]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:24 compute-0 podman[152965]: 2025-09-30 14:24:24.606594822 +0000 UTC m=+0.549215493 container remove a12e730b867b62d2c6278f387b95f87bb64f8bee132db01d752ef7d5b95c0d7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:24:24 compute-0 systemd[1]: libpod-conmon-a12e730b867b62d2c6278f387b95f87bb64f8bee132db01d752ef7d5b95c0d7d.scope: Deactivated successfully.
Sep 30 14:24:24 compute-0 sudo[152760]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:24 compute-0 sudo[153143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:24:24 compute-0 sudo[153143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:24 compute-0 sudo[153143]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:24] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:24:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:24] "GET /metrics HTTP/1.1" 200 48409 "" "Prometheus/2.51.0"
Sep 30 14:24:24 compute-0 sudo[153168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:24:24 compute-0 sudo[153168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:25 compute-0 podman[153308]: 2025-09-30 14:24:25.146447592 +0000 UTC m=+0.042208426 container create b9e44404909c59ea83a005b8fdbfa79cda2706a8501203bef0665459ed4dd70e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:24:25 compute-0 systemd[1]: Started libpod-conmon-b9e44404909c59ea83a005b8fdbfa79cda2706a8501203bef0665459ed4dd70e.scope.
Sep 30 14:24:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:24:25 compute-0 podman[153308]: 2025-09-30 14:24:25.126147926 +0000 UTC m=+0.021908780 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:24:25 compute-0 podman[153308]: 2025-09-30 14:24:25.225235979 +0000 UTC m=+0.120996843 container init b9e44404909c59ea83a005b8fdbfa79cda2706a8501203bef0665459ed4dd70e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mcclintock, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:24:25 compute-0 podman[153308]: 2025-09-30 14:24:25.232290729 +0000 UTC m=+0.128051563 container start b9e44404909c59ea83a005b8fdbfa79cda2706a8501203bef0665459ed4dd70e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mcclintock, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:24:25 compute-0 relaxed_mcclintock[153348]: 167 167
Sep 30 14:24:25 compute-0 systemd[1]: libpod-b9e44404909c59ea83a005b8fdbfa79cda2706a8501203bef0665459ed4dd70e.scope: Deactivated successfully.
Sep 30 14:24:25 compute-0 podman[153308]: 2025-09-30 14:24:25.238089855 +0000 UTC m=+0.133850689 container attach b9e44404909c59ea83a005b8fdbfa79cda2706a8501203bef0665459ed4dd70e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:24:25 compute-0 podman[153308]: 2025-09-30 14:24:25.239154463 +0000 UTC m=+0.134915297 container died b9e44404909c59ea83a005b8fdbfa79cda2706a8501203bef0665459ed4dd70e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mcclintock, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:24:25 compute-0 sudo[153378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjsmlezmwhbhyvmwmcbtfecmezzyhnix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242264.9292142-1703-65578203957746/AnsiballZ_file.py'
Sep 30 14:24:25 compute-0 sudo[153378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-af9d58690e1b770f1af8926174efd320ce9c112eb363854079055687d96423f8-merged.mount: Deactivated successfully.
Sep 30 14:24:25 compute-0 podman[153308]: 2025-09-30 14:24:25.305071625 +0000 UTC m=+0.200832459 container remove b9e44404909c59ea83a005b8fdbfa79cda2706a8501203bef0665459ed4dd70e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:24:25 compute-0 systemd[1]: libpod-conmon-b9e44404909c59ea83a005b8fdbfa79cda2706a8501203bef0665459ed4dd70e.scope: Deactivated successfully.
Sep 30 14:24:25 compute-0 podman[153401]: 2025-09-30 14:24:25.457609765 +0000 UTC m=+0.052962045 container create a3f87f5454109e9df961aecc3dd4f8b46f6e21adec218450338728237911f762 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:24:25 compute-0 python3.9[153387]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:24:25 compute-0 sudo[153378]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:25 compute-0 systemd[1]: Started libpod-conmon-a3f87f5454109e9df961aecc3dd4f8b46f6e21adec218450338728237911f762.scope.
Sep 30 14:24:25 compute-0 podman[153401]: 2025-09-30 14:24:25.43583107 +0000 UTC m=+0.031183370 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:24:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/629250209c0dd4baf59324da51a405e8a2e7578adb53d6ff1ecb2f9f1bfd19c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/629250209c0dd4baf59324da51a405e8a2e7578adb53d6ff1ecb2f9f1bfd19c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/629250209c0dd4baf59324da51a405e8a2e7578adb53d6ff1ecb2f9f1bfd19c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/629250209c0dd4baf59324da51a405e8a2e7578adb53d6ff1ecb2f9f1bfd19c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:25 compute-0 podman[153401]: 2025-09-30 14:24:25.631962741 +0000 UTC m=+0.227315041 container init a3f87f5454109e9df961aecc3dd4f8b46f6e21adec218450338728237911f762 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:24:25 compute-0 podman[153401]: 2025-09-30 14:24:25.639160964 +0000 UTC m=+0.234513264 container start a3f87f5454109e9df961aecc3dd4f8b46f6e21adec218450338728237911f762 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:24:25 compute-0 podman[153401]: 2025-09-30 14:24:25.651586888 +0000 UTC m=+0.246939198 container attach a3f87f5454109e9df961aecc3dd4f8b46f6e21adec218450338728237911f762 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Sep 30 14:24:25 compute-0 ceph-mon[74194]: pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 608 B/s wr, 1 op/s
Sep 30 14:24:25 compute-0 sudo[153502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrnxusdhhsjqovkeqzeqcfzaplwwmbin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242264.9292142-1703-65578203957746/AnsiballZ_stat.py'
Sep 30 14:24:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:25.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:25 compute-0 sudo[153502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:26 compute-0 python3.9[153510]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:24:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:26.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:26 compute-0 sudo[153502]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:26 compute-0 lvm[153620]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:24:26 compute-0 lvm[153620]: VG ceph_vg0 finished
Sep 30 14:24:26 compute-0 competent_chaum[153417]: {}
Sep 30 14:24:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Sep 30 14:24:26 compute-0 systemd[1]: libpod-a3f87f5454109e9df961aecc3dd4f8b46f6e21adec218450338728237911f762.scope: Deactivated successfully.
Sep 30 14:24:26 compute-0 systemd[1]: libpod-a3f87f5454109e9df961aecc3dd4f8b46f6e21adec218450338728237911f762.scope: Consumed 1.130s CPU time.
Sep 30 14:24:26 compute-0 podman[153401]: 2025-09-30 14:24:26.394562158 +0000 UTC m=+0.989914468 container died a3f87f5454109e9df961aecc3dd4f8b46f6e21adec218450338728237911f762 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:24:26 compute-0 sudo[153732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjhcqvtgklrqldvgfgpvusivkcszjjkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242266.1499786-1703-126136416209402/AnsiballZ_copy.py'
Sep 30 14:24:26 compute-0 sudo[153732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-629250209c0dd4baf59324da51a405e8a2e7578adb53d6ff1ecb2f9f1bfd19c1-merged.mount: Deactivated successfully.
Sep 30 14:24:26 compute-0 python3.9[153734]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759242266.1499786-1703-126136416209402/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:24:26 compute-0 sudo[153732]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:26 compute-0 podman[153401]: 2025-09-30 14:24:26.923095933 +0000 UTC m=+1.518448213 container remove a3f87f5454109e9df961aecc3dd4f8b46f6e21adec218450338728237911f762 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:24:26 compute-0 sudo[153168]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:26 compute-0 systemd[1]: libpod-conmon-a3f87f5454109e9df961aecc3dd4f8b46f6e21adec218450338728237911f762.scope: Deactivated successfully.
Sep 30 14:24:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:24:26 compute-0 sudo[153808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kubozunymhxjehlgwqfrlabftmimexte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242266.1499786-1703-126136416209402/AnsiballZ_systemd.py'
Sep 30 14:24:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:26.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:26 compute-0 sudo[153808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:27 compute-0 python3.9[153810]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 14:24:27 compute-0 systemd[1]: Reloading.
Sep 30 14:24:27 compute-0 systemd-rc-local-generator[153838]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:24:27 compute-0 systemd-sysv-generator[153842]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:24:27 compute-0 ceph-mon[74194]: pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Sep 30 14:24:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:24:27 compute-0 sudo[153808]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 14:24:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 14:24:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:27 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:24:27 compute-0 sudo[153859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:24:27 compute-0 sudo[153859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:27 compute-0 sudo[153859]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:27.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:27 compute-0 sudo[153958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unvdzypcjgqavnqxylvfdnvventlbtbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242266.1499786-1703-126136416209402/AnsiballZ_systemd.py'
Sep 30 14:24:27 compute-0 sudo[153958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:28.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:28 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad18000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:28 compute-0 python3.9[153960]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:24:28 compute-0 systemd[1]: Reloading.
Sep 30 14:24:28 compute-0 systemd-rc-local-generator[153995]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:24:28 compute-0 systemd-sysv-generator[153999]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:24:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 955 B/s wr, 3 op/s
Sep 30 14:24:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:28 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad000016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:28 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:28 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:24:28 compute-0 systemd[1]: Starting ovn_controller container...
Sep 30 14:24:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ea8b9f7d7d7dd277bc0a4baa35aec1c74fec35e36a3f0947bce21a7e98ed92/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Sep 30 14:24:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:28.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:29 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6.
Sep 30 14:24:29 compute-0 podman[154005]: 2025-09-30 14:24:29.35781118 +0000 UTC m=+0.771772194 container init 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:24:29 compute-0 ovn_controller[154021]: + sudo -E kolla_set_configs
Sep 30 14:24:29 compute-0 podman[154005]: 2025-09-30 14:24:29.387122528 +0000 UTC m=+0.801083542 container start 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.build-date=20250923, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Sep 30 14:24:29 compute-0 edpm-start-podman-container[154005]: ovn_controller
Sep 30 14:24:29 compute-0 systemd[1]: Created slice User Slice of UID 0.
Sep 30 14:24:29 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Sep 30 14:24:29 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Sep 30 14:24:29 compute-0 systemd[1]: Starting User Manager for UID 0...
Sep 30 14:24:29 compute-0 systemd[154060]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Sep 30 14:24:29 compute-0 edpm-start-podman-container[154004]: Creating additional drop-in dependency for "ovn_controller" (8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6)
Sep 30 14:24:29 compute-0 podman[154029]: 2025-09-30 14:24:29.481014801 +0000 UTC m=+0.082758625 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:24:29 compute-0 systemd[1]: Reloading.
Sep 30 14:24:29 compute-0 systemd-rc-local-generator[154102]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:24:29 compute-0 systemd-sysv-generator[154108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:24:29 compute-0 systemd[154060]: Queued start job for default target Main User Target.
Sep 30 14:24:29 compute-0 systemd[154060]: Created slice User Application Slice.
Sep 30 14:24:29 compute-0 systemd[154060]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Sep 30 14:24:29 compute-0 systemd[154060]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 14:24:29 compute-0 systemd[154060]: Reached target Paths.
Sep 30 14:24:29 compute-0 systemd[154060]: Reached target Timers.
Sep 30 14:24:29 compute-0 systemd[154060]: Starting D-Bus User Message Bus Socket...
Sep 30 14:24:29 compute-0 systemd[154060]: Starting Create User's Volatile Files and Directories...
Sep 30 14:24:29 compute-0 systemd[154060]: Listening on D-Bus User Message Bus Socket.
Sep 30 14:24:29 compute-0 systemd[154060]: Reached target Sockets.
Sep 30 14:24:29 compute-0 systemd[154060]: Finished Create User's Volatile Files and Directories.
Sep 30 14:24:29 compute-0 systemd[154060]: Reached target Basic System.
Sep 30 14:24:29 compute-0 systemd[154060]: Reached target Main User Target.
Sep 30 14:24:29 compute-0 systemd[154060]: Startup finished in 144ms.
Sep 30 14:24:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:24:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:24:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:29 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:24:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:24:29 compute-0 ceph-mon[74194]: pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 955 B/s wr, 3 op/s
Sep 30 14:24:29 compute-0 systemd[1]: Started User Manager for UID 0.
Sep 30 14:24:29 compute-0 systemd[1]: Started ovn_controller container.
Sep 30 14:24:29 compute-0 systemd[1]: 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6-2070e5a5399f6ffd.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 14:24:29 compute-0 systemd[1]: 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6-2070e5a5399f6ffd.service: Failed with result 'exit-code'.
Sep 30 14:24:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:24:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:24:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:24:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:24:29 compute-0 systemd[1]: Started Session c1 of User root.
Sep 30 14:24:29 compute-0 sudo[153958]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:29 compute-0 ovn_controller[154021]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 14:24:29 compute-0 ovn_controller[154021]: INFO:__main__:Validating config file
Sep 30 14:24:29 compute-0 ovn_controller[154021]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 14:24:29 compute-0 ovn_controller[154021]: INFO:__main__:Writing out command to execute
Sep 30 14:24:29 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Sep 30 14:24:29 compute-0 ovn_controller[154021]: ++ cat /run_command
Sep 30 14:24:29 compute-0 ovn_controller[154021]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Sep 30 14:24:29 compute-0 ovn_controller[154021]: + ARGS=
Sep 30 14:24:29 compute-0 ovn_controller[154021]: + sudo kolla_copy_cacerts
Sep 30 14:24:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:29.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:29 compute-0 systemd[1]: Started Session c2 of User root.
Sep 30 14:24:29 compute-0 ovn_controller[154021]: + [[ ! -n '' ]]
Sep 30 14:24:29 compute-0 ovn_controller[154021]: + . kolla_extend_start
Sep 30 14:24:29 compute-0 ovn_controller[154021]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Sep 30 14:24:29 compute-0 ovn_controller[154021]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Sep 30 14:24:29 compute-0 ovn_controller[154021]: + umask 0022
Sep 30 14:24:29 compute-0 ovn_controller[154021]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Sep 30 14:24:29 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Sep 30 14:24:29 compute-0 NetworkManager[45472]: <info>  [1759242269.9300] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Sep 30 14:24:29 compute-0 NetworkManager[45472]: <info>  [1759242269.9309] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:24:29 compute-0 NetworkManager[45472]: <info>  [1759242269.9321] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Sep 30 14:24:29 compute-0 NetworkManager[45472]: <info>  [1759242269.9328] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Sep 30 14:24:29 compute-0 NetworkManager[45472]: <info>  [1759242269.9330] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Sep 30 14:24:29 compute-0 kernel: br-int: entered promiscuous mode
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00014|main|INFO|OVS feature set changed, force recompute.
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00020|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00021|main|INFO|OVS feature set changed, force recompute.
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00022|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00023|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00024|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Sep 30 14:24:29 compute-0 NetworkManager[45472]: <info>  [1759242269.9573] manager: (ovn-388e30-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Sep 30 14:24:29 compute-0 NetworkManager[45472]: <info>  [1759242269.9578] manager: (ovn-6b99fc-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Sep 30 14:24:29 compute-0 NetworkManager[45472]: <info>  [1759242269.9583] manager: (ovn-28c918-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Sep 30 14:24:29 compute-0 systemd-udevd[154177]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:24:29 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Sep 30 14:24:29 compute-0 NetworkManager[45472]: <info>  [1759242269.9764] device (genev_sys_6081): carrier: link connected
Sep 30 14:24:29 compute-0 NetworkManager[45472]: <info>  [1759242269.9766] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Sep 30 14:24:29 compute-0 systemd-udevd[154180]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Sep 30 14:24:29 compute-0 ovn_controller[154021]: 2025-09-30T14:24:29Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Sep 30 14:24:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:30.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142430 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:24:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:30 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:30 compute-0 sudo[154284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvwphppcsdpfbvrrjnxlswejawbmsidp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242269.9776914-1787-141207555535247/AnsiballZ_command.py'
Sep 30 14:24:30 compute-0 sudo[154284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:24:30 compute-0 python3.9[154286]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:24:30 compute-0 ovs-vsctl[154287]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Sep 30 14:24:30 compute-0 sudo[154284]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:30 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:30 compute-0 sudo[154437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aedroijjvcsfrahwhsvdxsxqcprmdctg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242270.6954424-1811-247998849433472/AnsiballZ_command.py'
Sep 30 14:24:30 compute-0 sudo[154437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:31 compute-0 python3.9[154439]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:24:31 compute-0 ovs-vsctl[154441]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Sep 30 14:24:31 compute-0 sudo[154437]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:24:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:31 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:31.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:32.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:32 compute-0 sudo[154594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyplydxlpnpemesojxnigokkuqmietvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242271.8657372-1853-177475391434539/AnsiballZ_command.py'
Sep 30 14:24:32 compute-0 sudo[154594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:32 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:32 compute-0 python3.9[154596]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:24:32 compute-0 ovs-vsctl[154597]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Sep 30 14:24:32 compute-0 sudo[154594]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:32 compute-0 ceph-mon[74194]: pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:24:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:24:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:32 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:32 compute-0 sshd-session[142415]: Connection closed by 192.168.122.30 port 35054
Sep 30 14:24:32 compute-0 sshd-session[142412]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:24:32 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Sep 30 14:24:32 compute-0 systemd[1]: session-50.scope: Consumed 54.534s CPU time.
Sep 30 14:24:32 compute-0 systemd-logind[808]: Session 50 logged out. Waiting for processes to exit.
Sep 30 14:24:32 compute-0 systemd-logind[808]: Removed session 50.
Sep 30 14:24:33 compute-0 ceph-mon[74194]: pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:24:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:33 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:24:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:33.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:24:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:34.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:34 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:24:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:34 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:34] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:24:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:34] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:24:35 compute-0 ceph-mon[74194]: pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:24:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:35 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:35.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:36.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:36 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:24:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:36 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:36.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:37 compute-0 ceph-mon[74194]: pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:24:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:37 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:37.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:38.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:38 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:24:38 compute-0 sshd-session[154628]: Accepted publickey for zuul from 192.168.122.30 port 53780 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:24:38 compute-0 systemd-logind[808]: New session 52 of user zuul.
Sep 30 14:24:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:38 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:38 compute-0 systemd[1]: Started Session 52 of User zuul.
Sep 30 14:24:38 compute-0 sshd-session[154628]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:24:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:38.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:39 compute-0 sudo[154783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:24:39 compute-0 sudo[154783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:39 compute-0 sudo[154783]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:39 compute-0 python3.9[154781]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:24:39 compute-0 ceph-mon[74194]: pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:24:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:39 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:39.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:40 compute-0 systemd[1]: Stopping User Manager for UID 0...
Sep 30 14:24:40 compute-0 systemd[154060]: Activating special unit Exit the Session...
Sep 30 14:24:40 compute-0 systemd[154060]: Stopped target Main User Target.
Sep 30 14:24:40 compute-0 systemd[154060]: Stopped target Basic System.
Sep 30 14:24:40 compute-0 systemd[154060]: Stopped target Paths.
Sep 30 14:24:40 compute-0 systemd[154060]: Stopped target Sockets.
Sep 30 14:24:40 compute-0 systemd[154060]: Stopped target Timers.
Sep 30 14:24:40 compute-0 systemd[154060]: Stopped Daily Cleanup of User's Temporary Directories.
Sep 30 14:24:40 compute-0 systemd[154060]: Closed D-Bus User Message Bus Socket.
Sep 30 14:24:40 compute-0 systemd[154060]: Stopped Create User's Volatile Files and Directories.
Sep 30 14:24:40 compute-0 systemd[154060]: Removed slice User Application Slice.
Sep 30 14:24:40 compute-0 systemd[154060]: Reached target Shutdown.
Sep 30 14:24:40 compute-0 systemd[154060]: Finished Exit the Session.
Sep 30 14:24:40 compute-0 systemd[154060]: Reached target Exit the Session.
Sep 30 14:24:40 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Sep 30 14:24:40 compute-0 systemd[1]: Stopped User Manager for UID 0.
Sep 30 14:24:40 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Sep 30 14:24:40 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Sep 30 14:24:40 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Sep 30 14:24:40 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Sep 30 14:24:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:40.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:40 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Sep 30 14:24:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:40 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:24:40 compute-0 sudo[154965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcmmxxqpukbslumbcghogwoyfmkqzjms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242279.975571-62-239022756292701/AnsiballZ_file.py'
Sep 30 14:24:40 compute-0 sudo[154965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:40 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c002910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:40 compute-0 python3.9[154967]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:40 compute-0 sudo[154965]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142440 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:24:40 compute-0 sudo[155117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syetxjmtlxseyfbpgxynkkwkbwvchsol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242280.7546456-62-173298555315411/AnsiballZ_file.py'
Sep 30 14:24:40 compute-0 sudo[155117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:41 compute-0 python3.9[155119]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:41 compute-0 sudo[155117]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:41 compute-0 sudo[155270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewzaxymrfntysyxtylltuyojmkdafmnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242281.3527353-62-46060812265952/AnsiballZ_file.py'
Sep 30 14:24:41 compute-0 sudo[155270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:41 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:41 compute-0 ceph-mon[74194]: pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:24:41 compute-0 python3.9[155272]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:41 compute-0 sudo[155270]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:41.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:24:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:42.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:24:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:42 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:42 compute-0 sudo[155423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qljpdlylixuemznjdwmpwawicrcbtpct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242281.915201-62-186217519719823/AnsiballZ_file.py'
Sep 30 14:24:42 compute-0 sudo[155423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:42 compute-0 python3.9[155425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:24:42 compute-0 sudo[155423]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:42 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:42 compute-0 sudo[155575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwhboimsjbbvjnivjtzinegddfgthutk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242282.5433729-62-10266309102819/AnsiballZ_file.py'
Sep 30 14:24:42 compute-0 sudo[155575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:42 compute-0 python3.9[155577]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:43 compute-0 sudo[155575]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:43 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:43.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:43 compute-0 python3.9[155728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:24:44 compute-0 ceph-mon[74194]: pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:24:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:44.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:44 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:24:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:44 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:44 compute-0 sudo[155879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmqjvuixtuyrurnnsjksrbzbxyawhmes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242284.1505036-194-73998135901502/AnsiballZ_seboolean.py'
Sep 30 14:24:44 compute-0 sudo[155879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:24:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:24:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:44] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Sep 30 14:24:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:44] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Sep 30 14:24:44 compute-0 python3.9[155881]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Sep 30 14:24:45 compute-0 ceph-mon[74194]: pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:24:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:24:45 compute-0 sudo[155879]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:45 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:45.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:46.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:46 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:46 compute-0 python3.9[156033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:46 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:46 compute-0 python3.9[156154]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242285.6730762-218-33608788435300/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:46.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:24:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:46.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:24:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:46.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:24:47 compute-0 ceph-mon[74194]: pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:24:47 compute-0 python3.9[156305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:47 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:47.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:48 compute-0 python3.9[156427]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242287.1465697-263-42182594882972/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:24:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:48.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:24:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:48 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:24:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:48 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:48 compute-0 sudo[156578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmmjjpfmghnbxtbfpsyrrgwuocjwuce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242288.5535896-314-210901063382137/AnsiballZ_setup.py'
Sep 30 14:24:48 compute-0 sudo[156578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:48.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:24:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:48.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:49 compute-0 python3.9[156580]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:24:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:49 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:24:49 compute-0 sudo[156578]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:49 compute-0 ceph-mon[74194]: pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:24:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:49 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:49 compute-0 sudo[156664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnpthkvepqbnupyhpipelvcqiiigrzjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242288.5535896-314-210901063382137/AnsiballZ_dnf.py'
Sep 30 14:24:49 compute-0 sudo[156664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:49.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:50 compute-0 python3.9[156666]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:24:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:50.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:50 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:24:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:50 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:51 compute-0 sudo[156664]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:51 compute-0 ceph-mon[74194]: pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:24:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:51 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:51.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:52.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:52 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:52 compute-0 sudo[156819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezeyikbidefygjpludscfgvnmhenhlyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242291.6712735-350-241842222622986/AnsiballZ_systemd.py'
Sep 30 14:24:52 compute-0 sudo[156819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:52 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:24:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:52 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:24:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:24:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:52 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:52 compute-0 python3.9[156821]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 14:24:52 compute-0 sudo[156819]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:53 compute-0 python3.9[156974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:53 compute-0 ceph-mon[74194]: pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:24:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:53 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:53 compute-0 python3.9[157096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242292.8530767-374-213366481896583/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:53.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:54.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:54 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:24:54 compute-0 python3.9[157247]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:54 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:54] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Sep 30 14:24:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:24:54] "GET /metrics HTTP/1.1" 200 48412 "" "Prometheus/2.51.0"
Sep 30 14:24:54 compute-0 python3.9[157368]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242293.9980028-374-203983572001040/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:55 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:24:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:24:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:55 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:55 compute-0 ceph-mon[74194]: pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:24:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:55.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:24:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:56.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:24:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:56 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:24:56 compute-0 python3.9[157521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:56 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:56 compute-0 python3.9[157642]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242295.7959514-506-57656483277642/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:56.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:57 compute-0 python3.9[157793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:24:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:57 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:57 compute-0 ceph-mon[74194]: pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:24:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:57.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:58 compute-0 python3.9[157915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242297.090905-506-187538432519645/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:24:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:24:58.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:24:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:58 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad00002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:24:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:58 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facec003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:58 compute-0 python3.9[158065]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:24:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:24:58.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:24:58 compute-0 ceph-mon[74194]: pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:24:59 compute-0 sudo[158217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzzeykiytdobixhzswyokabgschxhfdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242298.9596741-620-269893401311919/AnsiballZ_file.py'
Sep 30 14:24:59 compute-0 sudo[158217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:59 compute-0 python3.9[158219]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:24:59 compute-0 sudo[158217]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:24:59
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'volumes', 'backups', 'images']
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:24:59 compute-0 sudo[158221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:24:59 compute-0 sudo[158221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:24:59 compute-0 sudo[158221]: pam_unix(sudo:session): session closed for user root
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:24:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:24:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:24:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:24:59 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:24:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:24:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:24:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:24:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:24:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:24:59.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:24:59 compute-0 sudo[158410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btlnrzbtapcovpkdjsrvqqquhxzpesjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242299.6525638-644-63817578776115/AnsiballZ_stat.py'
Sep 30 14:24:59 compute-0 sudo[158410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:24:59 compute-0 ovn_controller[154021]: 2025-09-30T14:24:59Z|00025|memory|INFO|16128 kB peak resident set size after 30.0 seconds
Sep 30 14:24:59 compute-0 ovn_controller[154021]: 2025-09-30T14:24:59Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Sep 30 14:24:59 compute-0 podman[158373]: 2025-09-30 14:24:59.960958406 +0000 UTC m=+0.092703163 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Sep 30 14:25:00 compute-0 python3.9[158418]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:25:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:00.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:00 compute-0 sudo[158410]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:00 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:00 compute-0 sudo[158503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldyrsakqhnftududlwfgegbtnkqiaqyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242299.6525638-644-63817578776115/AnsiballZ_file.py'
Sep 30 14:25:00 compute-0 sudo[158503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:25:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:00 compute-0 python3.9[158505]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:25:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:00 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:00 compute-0 sudo[158503]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:25:00 compute-0 ceph-mon[74194]: pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:25:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142500 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:25:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:25:00 compute-0 sudo[158655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvjihirlaqyknbsaalihmimtvfppwogt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242300.7043772-644-123386564229183/AnsiballZ_stat.py'
Sep 30 14:25:00 compute-0 sudo[158655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:01 compute-0 python3.9[158657]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:25:01 compute-0 sudo[158655]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:01 compute-0 sudo[158734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwdlldbhrkcnktknlpsaorxaybovbfwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242300.7043772-644-123386564229183/AnsiballZ_file.py'
Sep 30 14:25:01 compute-0 sudo[158734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:01 compute-0 python3.9[158736]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:25:01 compute-0 sudo[158734]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:01 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:01.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:02.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:02 compute-0 sudo[158887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqajcggnkgdyjefdeilpdalpeleoxus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242301.906129-713-148201015963992/AnsiballZ_file.py'
Sep 30 14:25:02 compute-0 sudo[158887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:02 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:02 compute-0 python3.9[158889]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:02 compute-0 sudo[158887]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:25:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:02 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad0c003da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:02 compute-0 sudo[159039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrstvwtnjuwxubmnajwkbfmyhkuurwwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242302.646-737-254795559764745/AnsiballZ_stat.py'
Sep 30 14:25:02 compute-0 sudo[159039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:03 compute-0 python3.9[159041]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:25:03 compute-0 sudo[159039]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:03 compute-0 sudo[159117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbgdfofsxdwtqnjpklstzvzkcrjfxhmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242302.646-737-254795559764745/AnsiballZ_file.py'
Sep 30 14:25:03 compute-0 sudo[159117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:03 compute-0 python3.9[159120]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:03 compute-0 sudo[159117]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:03 compute-0 ceph-mon[74194]: pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:25:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:03 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:03.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:04 compute-0 sudo[159271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsaecikrlyvnwnjjbonotxlydtvkemye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242303.788261-773-24149907194138/AnsiballZ_stat.py'
Sep 30 14:25:04 compute-0 sudo[159271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:04.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:04 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:04 compute-0 python3.9[159273]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:25:04 compute-0 sudo[159271]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:25:04 compute-0 sudo[159349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmkrnhzdvpcagbnitmckwiqeaboecjij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242303.788261-773-24149907194138/AnsiballZ_file.py'
Sep 30 14:25:04 compute-0 sudo[159349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:04 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:04 compute-0 python3.9[159351]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:04] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Sep 30 14:25:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:04] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Sep 30 14:25:04 compute-0 sudo[159349]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:05 compute-0 sudo[159501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yokxvdyrcwzleuutnhlykmpgqlzmnvhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242304.9710743-809-53576753298187/AnsiballZ_systemd.py'
Sep 30 14:25:05 compute-0 sudo[159501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:05 compute-0 python3.9[159503]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:25:05 compute-0 systemd[1]: Reloading.
Sep 30 14:25:05 compute-0 ceph-mon[74194]: pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:25:05 compute-0 systemd-rc-local-generator[159531]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:25:05 compute-0 systemd-sysv-generator[159535]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:25:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:05 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:05 compute-0 sudo[159501]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:05.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 14:25:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:06.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 14:25:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:06 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fad04001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:06 compute-0 sudo[159692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxxwucpdwtwspophbxoxuqfssccrvwnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242306.0999734-833-237102045646525/AnsiballZ_stat.py'
Sep 30 14:25:06 compute-0 sudo[159692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:25:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:06 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7face80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:06 compute-0 python3.9[159694]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:25:06 compute-0 sudo[159692]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:06 compute-0 sudo[159770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcfbgwphiorkreyfzsqltknqvbrjjjty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242306.0999734-833-237102045646525/AnsiballZ_file.py'
Sep 30 14:25:06 compute-0 sudo[159770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:06.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:25:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:06.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:25:07 compute-0 python3.9[159772]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:07 compute-0 sudo[159770]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:07 compute-0 sudo[159923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igsudefxvuzdexusyzzrxgxellkqsmsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242307.2932022-869-18322097429560/AnsiballZ_stat.py'
Sep 30 14:25:07 compute-0 sudo[159923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:07 compute-0 ceph-mon[74194]: pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:25:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:07 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:07 compute-0 python3.9[159925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:25:07 compute-0 sudo[159923]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:07.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:07 compute-0 sudo[160002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erhtbpbvmodsxghwmrqwwxniugswjtfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242307.2932022-869-18322097429560/AnsiballZ_file.py'
Sep 30 14:25:07 compute-0 sudo[160002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:08.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:08 compute-0 kernel: ganesha.nfsd[153851]: segfault at 50 ip 00007fadc840a32e sp 00007fad99ffa210 error 4 in libntirpc.so.5.8[7fadc83ef000+2c000] likely on CPU 1 (core 0, socket 1)
Sep 30 14:25:08 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 14:25:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[152020]: 30/09/2025 14:25:08 : epoch 68dbe80f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7facf4004050 fd 39 proxy ignored for local
Sep 30 14:25:08 compute-0 python3.9[160004]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:08 compute-0 systemd[1]: Started Process Core Dump (PID 160005/UID 0).
Sep 30 14:25:08 compute-0 sudo[160002]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:25:08 compute-0 sudo[160156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqokcpcmfomioacezhwgyypbevnbdxad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242308.449958-905-68049976742929/AnsiballZ_systemd.py'
Sep 30 14:25:08 compute-0 sudo[160156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:08.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:25:09 compute-0 python3.9[160158]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:25:09 compute-0 systemd[1]: Reloading.
Sep 30 14:25:09 compute-0 systemd-rc-local-generator[160183]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:25:09 compute-0 systemd-sysv-generator[160187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:25:09 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 14:25:09 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 14:25:09 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 14:25:09 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 14:25:09 compute-0 sudo[160156]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:09 compute-0 systemd-coredump[160006]: Process 152042 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 44:
                                                    #0  0x00007fadc840a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 14:25:09 compute-0 systemd[1]: systemd-coredump@4-160005-0.service: Deactivated successfully.
Sep 30 14:25:09 compute-0 systemd[1]: systemd-coredump@4-160005-0.service: Consumed 1.240s CPU time.
Sep 30 14:25:09 compute-0 podman[160230]: 2025-09-30 14:25:09.613347072 +0000 UTC m=+0.036131052 container died 3ee602e5338e3a60a8e04eee81924ddf620b5ea058acc017f2e9979ba848b7a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:25:09 compute-0 ceph-mon[74194]: pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-abc669bbc091ee3e4c0b33c8e81cda760bec0a9bc8609393f9ba34df276cf237-merged.mount: Deactivated successfully.
Sep 30 14:25:09 compute-0 podman[160230]: 2025-09-30 14:25:09.688935384 +0000 UTC m=+0.111719344 container remove 3ee602e5338e3a60a8e04eee81924ddf620b5ea058acc017f2e9979ba848b7a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 14:25:09 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Main process exited, code=exited, status=139/n/a
Sep 30 14:25:09 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Failed with result 'exit-code'.
Sep 30 14:25:09 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.506s CPU time.
Sep 30 14:25:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:09.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:09 compute-0 sudo[160400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcxwmwqrialojhwmpnsaixyqmpbvxses ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242309.7267532-935-180389485344884/AnsiballZ_file.py'
Sep 30 14:25:09 compute-0 sudo[160400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:10 compute-0 python3.9[160402]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:25:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:10.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:10 compute-0 sudo[160400]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:25:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:10 compute-0 sudo[160552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-istvlmmbbsccqicohtghalrtscvekivq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242310.4078872-959-251046042031221/AnsiballZ_stat.py'
Sep 30 14:25:10 compute-0 sudo[160552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:10 compute-0 python3.9[160554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:25:10 compute-0 sudo[160552]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:11 compute-0 sudo[160675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvwbskcfpgihsdbyrshalyyyzzgryoud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242310.4078872-959-251046042031221/AnsiballZ_copy.py'
Sep 30 14:25:11 compute-0 sudo[160675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:11 compute-0 python3.9[160677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242310.4078872-959-251046042031221/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:25:11 compute-0 sudo[160675]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:11 compute-0 ceph-mon[74194]: pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:25:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:11.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:25:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:12.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:25:12 compute-0 sudo[160829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvvigbevvndjwhcxnwsgzuhdbexnuggx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242311.9376504-1010-84696653604565/AnsiballZ_file.py'
Sep 30 14:25:12 compute-0 sudo[160829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:12 compute-0 python3.9[160831]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:25:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:25:12 compute-0 sudo[160829]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:12 compute-0 sudo[160981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzopdufbigztfiusrdzoyglwykptzsbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242312.705513-1034-20079772722860/AnsiballZ_stat.py'
Sep 30 14:25:12 compute-0 sudo[160981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:13 compute-0 python3.9[160983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:25:13 compute-0 sudo[160981]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:13 compute-0 sudo[161105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjkrrvgqeemswqzkxnunhfxgemuncklu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242312.705513-1034-20079772722860/AnsiballZ_copy.py'
Sep 30 14:25:13 compute-0 sudo[161105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:13 compute-0 python3.9[161107]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242312.705513-1034-20079772722860/.source.json _original_basename=.lg029g_k follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:13 compute-0 ceph-mon[74194]: pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:25:13 compute-0 sudo[161105]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:13.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:14.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142514 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:25:14 compute-0 sudo[161258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apcbwywxipdvbhsvuxyzarfienmwunlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242313.9101617-1079-197276442553475/AnsiballZ_file.py'
Sep 30 14:25:14 compute-0 sudo[161258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:14 compute-0 python3.9[161260]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:14 compute-0 sudo[161258]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:25:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:25:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:25:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:25:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:14] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:25:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:14] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:25:14 compute-0 sudo[161410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikzbgsllhfesxottvnwntrcuzncmkhpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242314.659931-1103-229853851540111/AnsiballZ_stat.py'
Sep 30 14:25:14 compute-0 sudo[161410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:15 compute-0 sudo[161410]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:15 compute-0 sudo[161534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xinjoveyyhaddnajkzlvatoyyzheoipg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242314.659931-1103-229853851540111/AnsiballZ_copy.py'
Sep 30 14:25:15 compute-0 sudo[161534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:15 compute-0 sudo[161534]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142515 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:25:15 compute-0 ceph-mon[74194]: pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:25:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:15.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:16.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:25:16 compute-0 sudo[161687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkbrmwhfbngtkhlvsifjewwvarohisvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242316.103321-1154-75839053227152/AnsiballZ_container_config_data.py'
Sep 30 14:25:16 compute-0 sudo[161687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:16 compute-0 python3.9[161689]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Sep 30 14:25:16 compute-0 sudo[161687]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:16.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:25:17 compute-0 sudo[161840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziuztgvgbzfmfeckvvcptpcmfoxqpwgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242317.021546-1181-144040696870550/AnsiballZ_container_config_hash.py'
Sep 30 14:25:17 compute-0 sudo[161840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:17 compute-0 python3.9[161842]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 14:25:17 compute-0 sudo[161840]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:17 compute-0 ceph-mon[74194]: pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:25:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:17.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:25:18 compute-0 auditd[705]: Audit daemon rotating log files
Sep 30 14:25:18 compute-0 sudo[161993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrrfwtzqobwmotdhxrmgleopsuolmfdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242317.986263-1208-172588247732893/AnsiballZ_podman_container_info.py'
Sep 30 14:25:18 compute-0 sudo[161993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:18 compute-0 python3.9[161995]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Sep 30 14:25:18 compute-0 sudo[161993]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:18.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:25:19 compute-0 sudo[162048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:25:19 compute-0 sudo[162048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:19 compute-0 sudo[162048]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:19 compute-0 ceph-mon[74194]: pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:25:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:19.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:20 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Scheduled restart job, restart counter is at 5.
Sep 30 14:25:20 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:25:20 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.506s CPU time.
Sep 30 14:25:20 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:25:20 compute-0 sudo[162213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfcbjmnqjgfozvyjzzmjyzvxotczknpo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759242319.685228-1247-218461518920003/AnsiballZ_edpm_container_manage.py'
Sep 30 14:25:20 compute-0 sudo[162213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:20.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:20 compute-0 podman[162253]: 2025-09-30 14:25:20.273707588 +0000 UTC m=+0.042947075 container create 80179a74fce2d068837386b4b41b6e1dd6e60344a4e95807b646c30c4597f9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 14:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daead3677801043f1df4ec6991bed2a91f11985fb06e2646406fe300e2adbedb/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daead3677801043f1df4ec6991bed2a91f11985fb06e2646406fe300e2adbedb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daead3677801043f1df4ec6991bed2a91f11985fb06e2646406fe300e2adbedb/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daead3677801043f1df4ec6991bed2a91f11985fb06e2646406fe300e2adbedb/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:20 compute-0 podman[162253]: 2025-09-30 14:25:20.34669815 +0000 UTC m=+0.115937667 container init 80179a74fce2d068837386b4b41b6e1dd6e60344a4e95807b646c30c4597f9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:25:20 compute-0 podman[162253]: 2025-09-30 14:25:20.25256021 +0000 UTC m=+0.021799717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:25:20 compute-0 podman[162253]: 2025-09-30 14:25:20.352130576 +0000 UTC m=+0.121370063 container start 80179a74fce2d068837386b4b41b6e1dd6e60344a4e95807b646c30c4597f9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:25:20 compute-0 bash[162253]: 80179a74fce2d068837386b4b41b6e1dd6e60344a4e95807b646c30c4597f9c6
Sep 30 14:25:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:25:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:25:20 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:25:20 compute-0 python3[162218]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 14:25:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:25:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:25:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:25:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:25:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:25:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:25:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:25:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:21 compute-0 ceph-mon[74194]: pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:25:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:21.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:22.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:25:23 compute-0 ceph-mon[74194]: pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:25:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:23.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:24.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:25:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:24] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:25:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:24] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:25:24 compute-0 ceph-mon[74194]: pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:25:24 compute-0 sshd-session[162394]: Received disconnect from 91.224.92.108 port 18828:11:  [preauth]
Sep 30 14:25:24 compute-0 sshd-session[162394]: Disconnected from authenticating user root 91.224.92.108 port 18828 [preauth]
Sep 30 14:25:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:25:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:25.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:25:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:26.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:25:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:26 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:25:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:26 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:25:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:27.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:25:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:27.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:28 compute-0 sudo[162415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:25:28 compute-0 sudo[162415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:28 compute-0 sudo[162415]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:28 compute-0 sudo[162440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:25:28 compute-0 sudo[162440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:28.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:25:28 compute-0 ceph-mon[74194]: pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:25:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:28.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:25:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:29 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:25:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:29 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:25:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:29 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:25:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:29 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:25:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:25:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:25:29 compute-0 sudo[162440]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:25:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:25:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:25:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:25:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:25:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:25:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:29.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:30 compute-0 ceph-mon[74194]: pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:25:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:25:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 14:25:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:30.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 14:25:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:25:30 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:25:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:25:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:25:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 605 B/s wr, 2 op/s
Sep 30 14:25:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:25:30 compute-0 podman[162316]: 2025-09-30 14:25:30.296338983 +0000 UTC m=+9.837534861 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Sep 30 14:25:30 compute-0 podman[162515]: 2025-09-30 14:25:30.372949071 +0000 UTC m=+0.299993136 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 14:25:30 compute-0 podman[162565]: 2025-09-30 14:25:30.40871426 +0000 UTC m=+0.020362504 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Sep 30 14:25:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:25:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:25:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:25:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:25:30 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:25:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:25:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:25:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:25:30 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:25:30 compute-0 sudo[162581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:25:30 compute-0 sudo[162581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:30 compute-0 sudo[162581]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:30 compute-0 podman[162565]: 2025-09-30 14:25:30.751055895 +0000 UTC m=+0.362704119 container create c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Sep 30 14:25:30 compute-0 python3[162218]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Sep 30 14:25:30 compute-0 sudo[162607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:25:30 compute-0 sudo[162607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:30 compute-0 sudo[162213]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:31 compute-0 podman[162722]: 2025-09-30 14:25:31.19176927 +0000 UTC m=+0.025677941 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:25:31 compute-0 sudo[162862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kadlagrsqckaapjxoxrssfjvcvssbdnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242331.2424061-1271-42233140739368/AnsiballZ_stat.py'
Sep 30 14:25:31 compute-0 sudo[162862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:31 compute-0 podman[162722]: 2025-09-30 14:25:31.602839665 +0000 UTC m=+0.436748346 container create 5e618ffb4da69e5a67d2a62257b3ec9886e27b815c0da923bb985ac6d7cefab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 14:25:31 compute-0 systemd[1]: Started libpod-conmon-5e618ffb4da69e5a67d2a62257b3ec9886e27b815c0da923bb985ac6d7cefab4.scope.
Sep 30 14:25:31 compute-0 python3.9[162864]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:25:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:25:31 compute-0 sudo[162862]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:25:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:25:31 compute-0 ceph-mon[74194]: pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 605 B/s wr, 2 op/s
Sep 30 14:25:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:25:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:25:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:25:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:25:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:25:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:31.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:32 compute-0 podman[162722]: 2025-09-30 14:25:32.128806206 +0000 UTC m=+0.962714907 container init 5e618ffb4da69e5a67d2a62257b3ec9886e27b815c0da923bb985ac6d7cefab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:25:32 compute-0 podman[162722]: 2025-09-30 14:25:32.136680774 +0000 UTC m=+0.970589425 container start 5e618ffb4da69e5a67d2a62257b3ec9886e27b815c0da923bb985ac6d7cefab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:25:32 compute-0 affectionate_noyce[162867]: 167 167
Sep 30 14:25:32 compute-0 systemd[1]: libpod-5e618ffb4da69e5a67d2a62257b3ec9886e27b815c0da923bb985ac6d7cefab4.scope: Deactivated successfully.
Sep 30 14:25:32 compute-0 podman[162722]: 2025-09-30 14:25:32.147472372 +0000 UTC m=+0.981381043 container attach 5e618ffb4da69e5a67d2a62257b3ec9886e27b815c0da923bb985ac6d7cefab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:25:32 compute-0 conmon[162867]: conmon 5e618ffb4da69e5a67d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e618ffb4da69e5a67d2a62257b3ec9886e27b815c0da923bb985ac6d7cefab4.scope/container/memory.events
Sep 30 14:25:32 compute-0 podman[162722]: 2025-09-30 14:25:32.148575353 +0000 UTC m=+0.982484014 container died 5e618ffb4da69e5a67d2a62257b3ec9886e27b815c0da923bb985ac6d7cefab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:25:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:32.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-869ab27466f5a245a96a8a1f812a0d8e4f119e3894cb20dd5ef1629e969eb7c9-merged.mount: Deactivated successfully.
Sep 30 14:25:32 compute-0 podman[162722]: 2025-09-30 14:25:32.240026831 +0000 UTC m=+1.073935482 container remove 5e618ffb4da69e5a67d2a62257b3ec9886e27b815c0da923bb985ac6d7cefab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:25:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Sep 30 14:25:32 compute-0 systemd[1]: libpod-conmon-5e618ffb4da69e5a67d2a62257b3ec9886e27b815c0da923bb985ac6d7cefab4.scope: Deactivated successfully.
Sep 30 14:25:32 compute-0 podman[162995]: 2025-09-30 14:25:32.436750971 +0000 UTC m=+0.079587722 container create 69dec7ab89f208665af851babf9c0786f761a342fa3b810ae8278fee2cf043df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Sep 30 14:25:32 compute-0 systemd[1]: Started libpod-conmon-69dec7ab89f208665af851babf9c0786f761a342fa3b810ae8278fee2cf043df.scope.
Sep 30 14:25:32 compute-0 podman[162995]: 2025-09-30 14:25:32.3817527 +0000 UTC m=+0.024589471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:25:32 compute-0 sudo[163062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifkhjbzwhcytyamhazjocixsqekhapta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242332.1917765-1298-54586072622327/AnsiballZ_file.py'
Sep 30 14:25:32 compute-0 sudo[163062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122d6c2a9c3904d5f63ca726c05402092c179a6927f9373c66eca77936311fb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122d6c2a9c3904d5f63ca726c05402092c179a6927f9373c66eca77936311fb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122d6c2a9c3904d5f63ca726c05402092c179a6927f9373c66eca77936311fb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122d6c2a9c3904d5f63ca726c05402092c179a6927f9373c66eca77936311fb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122d6c2a9c3904d5f63ca726c05402092c179a6927f9373c66eca77936311fb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:32 compute-0 podman[162995]: 2025-09-30 14:25:32.611756949 +0000 UTC m=+0.254593720 container init 69dec7ab89f208665af851babf9c0786f761a342fa3b810ae8278fee2cf043df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_elbakyan, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:25:32 compute-0 podman[162995]: 2025-09-30 14:25:32.61792496 +0000 UTC m=+0.260761701 container start 69dec7ab89f208665af851babf9c0786f761a342fa3b810ae8278fee2cf043df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 14:25:32 compute-0 podman[162995]: 2025-09-30 14:25:32.700371179 +0000 UTC m=+0.343207940 container attach 69dec7ab89f208665af851babf9c0786f761a342fa3b810ae8278fee2cf043df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_elbakyan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:25:32 compute-0 python3.9[163067]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:32 compute-0 sudo[163062]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 14:25:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:25:32 compute-0 ceph-mon[74194]: pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Sep 30 14:25:32 compute-0 pedantic_elbakyan[163064]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:25:32 compute-0 pedantic_elbakyan[163064]: --> All data devices are unavailable
Sep 30 14:25:32 compute-0 sudo[163163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajghxygjgatbpzgvryjgwivyonficeqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242332.1917765-1298-54586072622327/AnsiballZ_stat.py'
Sep 30 14:25:32 compute-0 sudo[163163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:33 compute-0 systemd[1]: libpod-69dec7ab89f208665af851babf9c0786f761a342fa3b810ae8278fee2cf043df.scope: Deactivated successfully.
Sep 30 14:25:33 compute-0 podman[162995]: 2025-09-30 14:25:33.005356312 +0000 UTC m=+0.648193053 container died 69dec7ab89f208665af851babf9c0786f761a342fa3b810ae8278fee2cf043df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_elbakyan, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:25:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-122d6c2a9c3904d5f63ca726c05402092c179a6927f9373c66eca77936311fb4-merged.mount: Deactivated successfully.
Sep 30 14:25:33 compute-0 podman[162995]: 2025-09-30 14:25:33.090989459 +0000 UTC m=+0.733826210 container remove 69dec7ab89f208665af851babf9c0786f761a342fa3b810ae8278fee2cf043df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:25:33 compute-0 systemd[1]: libpod-conmon-69dec7ab89f208665af851babf9c0786f761a342fa3b810ae8278fee2cf043df.scope: Deactivated successfully.
Sep 30 14:25:33 compute-0 sudo[162607]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:33 compute-0 sudo[163181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:25:33 compute-0 sudo[163181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:33 compute-0 sudo[163181]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:33 compute-0 python3.9[163166]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:25:33 compute-0 sudo[163163]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:33 compute-0 sudo[163206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:25:33 compute-0 sudo[163206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:33 compute-0 podman[163370]: 2025-09-30 14:25:33.646901839 +0000 UTC m=+0.047526465 container create e40bac0cb0553ef8c5dd6be2aca18a1e1e09f3a68e57fd6e3e8fedb8399da0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:25:33 compute-0 systemd[1]: Started libpod-conmon-e40bac0cb0553ef8c5dd6be2aca18a1e1e09f3a68e57fd6e3e8fedb8399da0dd.scope.
Sep 30 14:25:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:33 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22b0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:25:33 compute-0 podman[163370]: 2025-09-30 14:25:33.622437313 +0000 UTC m=+0.023061949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:25:33 compute-0 sudo[163439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pszruiwxvlnciyqudhcjljsgxomszmmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242333.28162-1298-188612707624040/AnsiballZ_copy.py'
Sep 30 14:25:33 compute-0 sudo[163439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:33 compute-0 podman[163370]: 2025-09-30 14:25:33.732392313 +0000 UTC m=+0.133016959 container init e40bac0cb0553ef8c5dd6be2aca18a1e1e09f3a68e57fd6e3e8fedb8399da0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_tu, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 14:25:33 compute-0 podman[163370]: 2025-09-30 14:25:33.740350813 +0000 UTC m=+0.140975439 container start e40bac0cb0553ef8c5dd6be2aca18a1e1e09f3a68e57fd6e3e8fedb8399da0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_tu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:25:33 compute-0 podman[163370]: 2025-09-30 14:25:33.744028465 +0000 UTC m=+0.144653091 container attach e40bac0cb0553ef8c5dd6be2aca18a1e1e09f3a68e57fd6e3e8fedb8399da0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_tu, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:25:33 compute-0 brave_tu[163421]: 167 167
Sep 30 14:25:33 compute-0 systemd[1]: libpod-e40bac0cb0553ef8c5dd6be2aca18a1e1e09f3a68e57fd6e3e8fedb8399da0dd.scope: Deactivated successfully.
Sep 30 14:25:33 compute-0 podman[163370]: 2025-09-30 14:25:33.746411951 +0000 UTC m=+0.147036577 container died e40bac0cb0553ef8c5dd6be2aca18a1e1e09f3a68e57fd6e3e8fedb8399da0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_tu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 14:25:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d039f9bc581bc7b82e533b1b84dbe7b1f2c719c94e383d3a1add954f2d8cfc1d-merged.mount: Deactivated successfully.
Sep 30 14:25:33 compute-0 podman[163370]: 2025-09-30 14:25:33.787518797 +0000 UTC m=+0.188143423 container remove e40bac0cb0553ef8c5dd6be2aca18a1e1e09f3a68e57fd6e3e8fedb8399da0dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_tu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:25:33 compute-0 systemd[1]: libpod-conmon-e40bac0cb0553ef8c5dd6be2aca18a1e1e09f3a68e57fd6e3e8fedb8399da0dd.scope: Deactivated successfully.
Sep 30 14:25:33 compute-0 python3.9[163445]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759242333.28162-1298-188612707624040/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:33.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:33 compute-0 podman[163466]: 2025-09-30 14:25:33.954604527 +0000 UTC m=+0.042581158 container create a635ec8a2dac2f2d1c25ca6f74a12e9dd1d0e5afd0c40bbe462c8ed3201f70cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hodgkin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 14:25:33 compute-0 sudo[163439]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:33 compute-0 systemd[1]: Started libpod-conmon-a635ec8a2dac2f2d1c25ca6f74a12e9dd1d0e5afd0c40bbe462c8ed3201f70cb.scope.
Sep 30 14:25:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777126a5930c8bfa77bd9ab05b347ee45d8bba121e641979af61741b0a263164/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777126a5930c8bfa77bd9ab05b347ee45d8bba121e641979af61741b0a263164/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777126a5930c8bfa77bd9ab05b347ee45d8bba121e641979af61741b0a263164/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777126a5930c8bfa77bd9ab05b347ee45d8bba121e641979af61741b0a263164/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:34 compute-0 podman[163466]: 2025-09-30 14:25:33.935622362 +0000 UTC m=+0.023599023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:25:34 compute-0 podman[163466]: 2025-09-30 14:25:34.040767549 +0000 UTC m=+0.128744210 container init a635ec8a2dac2f2d1c25ca6f74a12e9dd1d0e5afd0c40bbe462c8ed3201f70cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:25:34 compute-0 podman[163466]: 2025-09-30 14:25:34.047755942 +0000 UTC m=+0.135732583 container start a635ec8a2dac2f2d1c25ca6f74a12e9dd1d0e5afd0c40bbe462c8ed3201f70cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hodgkin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Sep 30 14:25:34 compute-0 podman[163466]: 2025-09-30 14:25:34.054457918 +0000 UTC m=+0.142434599 container attach a635ec8a2dac2f2d1c25ca6f74a12e9dd1d0e5afd0c40bbe462c8ed3201f70cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Sep 30 14:25:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:34 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:34.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:34 compute-0 sudo[163562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhrbzrvfefzuxsuqtaekhyuwleetumze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242333.28162-1298-188612707624040/AnsiballZ_systemd.py'
Sep 30 14:25:34 compute-0 sudo[163562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 951 B/s wr, 3 op/s
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]: {
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:     "0": [
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:         {
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "devices": [
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "/dev/loop3"
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             ],
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "lv_name": "ceph_lv0",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "lv_size": "21470642176",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "name": "ceph_lv0",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "tags": {
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.cluster_name": "ceph",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.crush_device_class": "",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.encrypted": "0",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.osd_id": "0",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.type": "block",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.vdo": "0",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:                 "ceph.with_tpm": "0"
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             },
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "type": "block",
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:             "vg_name": "ceph_vg0"
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:         }
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]:     ]
Sep 30 14:25:34 compute-0 silly_hodgkin[163482]: }
Sep 30 14:25:34 compute-0 systemd[1]: libpod-a635ec8a2dac2f2d1c25ca6f74a12e9dd1d0e5afd0c40bbe462c8ed3201f70cb.scope: Deactivated successfully.
Sep 30 14:25:34 compute-0 podman[163466]: 2025-09-30 14:25:34.367523293 +0000 UTC m=+0.455499954 container died a635ec8a2dac2f2d1c25ca6f74a12e9dd1d0e5afd0c40bbe462c8ed3201f70cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 14:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-777126a5930c8bfa77bd9ab05b347ee45d8bba121e641979af61741b0a263164-merged.mount: Deactivated successfully.
Sep 30 14:25:34 compute-0 podman[163466]: 2025-09-30 14:25:34.412462126 +0000 UTC m=+0.500438787 container remove a635ec8a2dac2f2d1c25ca6f74a12e9dd1d0e5afd0c40bbe462c8ed3201f70cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hodgkin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:25:34 compute-0 systemd[1]: libpod-conmon-a635ec8a2dac2f2d1c25ca6f74a12e9dd1d0e5afd0c40bbe462c8ed3201f70cb.scope: Deactivated successfully.
Sep 30 14:25:34 compute-0 sudo[163206]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:34 compute-0 sudo[163579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:25:34 compute-0 sudo[163579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:34 compute-0 sudo[163579]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:34 compute-0 python3.9[163564]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 14:25:34 compute-0 sudo[163604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:25:34 compute-0 sudo[163604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:34 compute-0 systemd[1]: Reloading.
Sep 30 14:25:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:34 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:34 compute-0 systemd-rc-local-generator[163655]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:25:34 compute-0 systemd-sysv-generator[163658]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:25:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:34] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:25:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:34] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:25:34 compute-0 sudo[163562]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:34 compute-0 podman[163706]: 2025-09-30 14:25:34.972558652 +0000 UTC m=+0.044853181 container create 9d1d483bbab164de19088b6d61a51568fb3d7d43e464ed7c27fc7e40f225a772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:25:35 compute-0 systemd[1]: Started libpod-conmon-9d1d483bbab164de19088b6d61a51568fb3d7d43e464ed7c27fc7e40f225a772.scope.
Sep 30 14:25:35 compute-0 podman[163706]: 2025-09-30 14:25:34.955546311 +0000 UTC m=+0.027840860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:25:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:25:35 compute-0 podman[163706]: 2025-09-30 14:25:35.067670741 +0000 UTC m=+0.139965310 container init 9d1d483bbab164de19088b6d61a51568fb3d7d43e464ed7c27fc7e40f225a772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lovelace, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:25:35 compute-0 podman[163706]: 2025-09-30 14:25:35.076421823 +0000 UTC m=+0.148716372 container start 9d1d483bbab164de19088b6d61a51568fb3d7d43e464ed7c27fc7e40f225a772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:25:35 compute-0 podman[163706]: 2025-09-30 14:25:35.080149536 +0000 UTC m=+0.152444065 container attach 9d1d483bbab164de19088b6d61a51568fb3d7d43e464ed7c27fc7e40f225a772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 14:25:35 compute-0 quirky_lovelace[163749]: 167 167
Sep 30 14:25:35 compute-0 systemd[1]: libpod-9d1d483bbab164de19088b6d61a51568fb3d7d43e464ed7c27fc7e40f225a772.scope: Deactivated successfully.
Sep 30 14:25:35 compute-0 podman[163706]: 2025-09-30 14:25:35.085065552 +0000 UTC m=+0.157360071 container died 9d1d483bbab164de19088b6d61a51568fb3d7d43e464ed7c27fc7e40f225a772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lovelace, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 14:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b0a3af6b6a1a67b467a2d66979a7a8ad777e03b28478daa060d56ce5eb6638a-merged.mount: Deactivated successfully.
Sep 30 14:25:35 compute-0 podman[163706]: 2025-09-30 14:25:35.135440315 +0000 UTC m=+0.207734844 container remove 9d1d483bbab164de19088b6d61a51568fb3d7d43e464ed7c27fc7e40f225a772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_lovelace, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 14:25:35 compute-0 sudo[163810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhbnoiukqdnegvnvasdwgtomihqinkmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242333.28162-1298-188612707624040/AnsiballZ_systemd.py'
Sep 30 14:25:35 compute-0 sudo[163810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:35 compute-0 systemd[1]: libpod-conmon-9d1d483bbab164de19088b6d61a51568fb3d7d43e464ed7c27fc7e40f225a772.scope: Deactivated successfully.
Sep 30 14:25:35 compute-0 ceph-mon[74194]: pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 951 B/s wr, 3 op/s
Sep 30 14:25:35 compute-0 podman[163820]: 2025-09-30 14:25:35.355971871 +0000 UTC m=+0.071924118 container create 79621b165363bfdc85caff79fdf87233d96d29ea09998e146762a26671d4c043 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dewdney, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:25:35 compute-0 systemd[1]: Started libpod-conmon-79621b165363bfdc85caff79fdf87233d96d29ea09998e146762a26671d4c043.scope.
Sep 30 14:25:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5f96407849c7a22b8ef5e52129828a3e03bf7a945f44462ffef574ac7dd373/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:35 compute-0 podman[163820]: 2025-09-30 14:25:35.333864931 +0000 UTC m=+0.049817198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5f96407849c7a22b8ef5e52129828a3e03bf7a945f44462ffef574ac7dd373/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5f96407849c7a22b8ef5e52129828a3e03bf7a945f44462ffef574ac7dd373/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f5f96407849c7a22b8ef5e52129828a3e03bf7a945f44462ffef574ac7dd373/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:35 compute-0 podman[163820]: 2025-09-30 14:25:35.449361534 +0000 UTC m=+0.165313781 container init 79621b165363bfdc85caff79fdf87233d96d29ea09998e146762a26671d4c043 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dewdney, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:25:35 compute-0 podman[163820]: 2025-09-30 14:25:35.457546 +0000 UTC m=+0.173498237 container start 79621b165363bfdc85caff79fdf87233d96d29ea09998e146762a26671d4c043 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dewdney, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:25:35 compute-0 podman[163820]: 2025-09-30 14:25:35.461017116 +0000 UTC m=+0.176969363 container attach 79621b165363bfdc85caff79fdf87233d96d29ea09998e146762a26671d4c043 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:25:35 compute-0 python3.9[163812]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:25:35 compute-0 systemd[1]: Reloading.
Sep 30 14:25:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:35 compute-0 systemd-sysv-generator[163872]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:25:35 compute-0 systemd-rc-local-generator[163869]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:25:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:35 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:35 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Sep 30 14:25:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4debdf5be921cc9efb723e110c07f4b1e891207b77b2b4fd61cd1502e38114b7/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4debdf5be921cc9efb723e110c07f4b1e891207b77b2b4fd61cd1502e38114b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 14:25:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:35.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:35 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458.
Sep 30 14:25:35 compute-0 podman[163917]: 2025-09-30 14:25:35.968422055 +0000 UTC m=+0.134593733 container init c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:25:35 compute-0 ovn_metadata_agent[163949]: + sudo -E kolla_set_configs
Sep 30 14:25:35 compute-0 podman[163917]: 2025-09-30 14:25:35.994983779 +0000 UTC m=+0.161155437 container start c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Sep 30 14:25:36 compute-0 edpm-start-podman-container[163917]: ovn_metadata_agent
Sep 30 14:25:36 compute-0 edpm-start-podman-container[163911]: Creating additional drop-in dependency for "ovn_metadata_agent" (c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458)
Sep 30 14:25:36 compute-0 systemd[1]: Reloading.
Sep 30 14:25:36 compute-0 podman[163969]: 2025-09-30 14:25:36.085913093 +0000 UTC m=+0.080463956 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:25:36 compute-0 lvm[164010]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:25:36 compute-0 lvm[164010]: VG ceph_vg0 finished
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Validating config file
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Copying service configuration files
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Writing out command to execute
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Setting permission for /var/lib/neutron
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Setting permission for /var/lib/neutron/external
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Sep 30 14:25:36 compute-0 intelligent_dewdney[163838]: {}
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: ++ cat /run_command
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: + CMD=neutron-ovn-metadata-agent
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: + ARGS=
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: + sudo kolla_copy_cacerts
Sep 30 14:25:36 compute-0 systemd-rc-local-generator[164040]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:25:36 compute-0 systemd-sysv-generator[164044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:25:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142536 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: + [[ ! -n '' ]]
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: + . kolla_extend_start
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: Running command: 'neutron-ovn-metadata-agent'
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: + umask 0022
Sep 30 14:25:36 compute-0 ovn_metadata_agent[163949]: + exec neutron-ovn-metadata-agent
Sep 30 14:25:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:36.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:36 compute-0 podman[163820]: 2025-09-30 14:25:36.194198917 +0000 UTC m=+0.910151164 container died 79621b165363bfdc85caff79fdf87233d96d29ea09998e146762a26671d4c043 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dewdney, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:25:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.0 KiB/s wr, 32 op/s
Sep 30 14:25:36 compute-0 systemd[1]: Started ovn_metadata_agent container.
Sep 30 14:25:36 compute-0 systemd[1]: libpod-79621b165363bfdc85caff79fdf87233d96d29ea09998e146762a26671d4c043.scope: Deactivated successfully.
Sep 30 14:25:36 compute-0 systemd[1]: libpod-79621b165363bfdc85caff79fdf87233d96d29ea09998e146762a26671d4c043.scope: Consumed 1.111s CPU time.
Sep 30 14:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f5f96407849c7a22b8ef5e52129828a3e03bf7a945f44462ffef574ac7dd373-merged.mount: Deactivated successfully.
Sep 30 14:25:36 compute-0 sudo[163810]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:36 compute-0 podman[163820]: 2025-09-30 14:25:36.41998627 +0000 UTC m=+1.135938527 container remove 79621b165363bfdc85caff79fdf87233d96d29ea09998e146762a26671d4c043 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:25:36 compute-0 systemd[1]: libpod-conmon-79621b165363bfdc85caff79fdf87233d96d29ea09998e146762a26671d4c043.scope: Deactivated successfully.
Sep 30 14:25:36 compute-0 sudo[163604]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:25:36 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:25:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:25:36 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:25:36 compute-0 sudo[164089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:25:36 compute-0 sudo[164089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:36 compute-0 sudo[164089]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:37.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:25:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:37.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:25:37 compute-0 sshd-session[154631]: Connection closed by 192.168.122.30 port 53780
Sep 30 14:25:37 compute-0 sshd-session[154628]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:25:37 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Sep 30 14:25:37 compute-0 systemd[1]: session-52.scope: Consumed 52.535s CPU time.
Sep 30 14:25:37 compute-0 systemd-logind[808]: Session 52 logged out. Waiting for processes to exit.
Sep 30 14:25:37 compute-0 systemd-logind[808]: Removed session 52.
Sep 30 14:25:37 compute-0 ceph-mon[74194]: pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.0 KiB/s wr, 32 op/s
Sep 30 14:25:37 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:25:37 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:25:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142537 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:25:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:37 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:37.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:38 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:38.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.198 163966 INFO neutron.common.config [-] Logging enabled!
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.198 163966 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.198 163966 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.199 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.199 163966 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.199 163966 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.200 163966 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.200 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.200 163966 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.200 163966 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.200 163966 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.200 163966 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.200 163966 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.201 163966 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.201 163966 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.201 163966 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.201 163966 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.201 163966 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.201 163966 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.201 163966 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.201 163966 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.202 163966 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.202 163966 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.202 163966 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.202 163966 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.202 163966 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.202 163966 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.202 163966 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.202 163966 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.203 163966 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.203 163966 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.203 163966 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.203 163966 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.203 163966 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.203 163966 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.203 163966 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.203 163966 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.203 163966 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.204 163966 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.204 163966 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.204 163966 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.204 163966 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.204 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.204 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.204 163966 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.204 163966 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.204 163966 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.205 163966 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.205 163966 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.205 163966 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.205 163966 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.205 163966 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.205 163966 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.205 163966 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.205 163966 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.205 163966 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.206 163966 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.206 163966 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.206 163966 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.206 163966 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.206 163966 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.206 163966 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.206 163966 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.206 163966 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.207 163966 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.207 163966 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.207 163966 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.207 163966 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.207 163966 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.207 163966 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.207 163966 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.207 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.207 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.208 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.208 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.208 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.208 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.208 163966 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.208 163966 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.208 163966 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.208 163966 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.208 163966 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.209 163966 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.209 163966 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.209 163966 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.209 163966 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.209 163966 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.209 163966 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.209 163966 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.209 163966 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.209 163966 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.210 163966 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.211 163966 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.211 163966 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.211 163966 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.211 163966 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.211 163966 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.211 163966 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.211 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.211 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.211 163966 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.212 163966 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.212 163966 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.212 163966 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.212 163966 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.212 163966 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.212 163966 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.212 163966 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.212 163966 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.212 163966 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.213 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.213 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.213 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.213 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.213 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.213 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.213 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.213 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.213 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.214 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.214 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.214 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.214 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.214 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.214 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.214 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.214 163966 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.214 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.215 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.215 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.215 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.215 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.215 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.215 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.215 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.215 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.215 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.216 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.216 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.216 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.216 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.216 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.216 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.216 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.216 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.216 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.216 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.217 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.217 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.217 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.217 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.217 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.217 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.217 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.217 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.217 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.218 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.218 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.218 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.218 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.218 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.218 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.218 163966 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.218 163966 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.219 163966 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.219 163966 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.219 163966 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.219 163966 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.219 163966 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.219 163966 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.219 163966 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.219 163966 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.219 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.220 163966 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.220 163966 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.220 163966 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.220 163966 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.220 163966 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.220 163966 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.220 163966 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.221 163966 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.221 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.221 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.221 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.221 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.221 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.221 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.221 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.222 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.222 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.222 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.222 163966 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.222 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.222 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.222 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.223 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.223 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.223 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.223 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.223 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.223 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.223 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.224 163966 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.224 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.224 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.224 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.224 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.224 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.224 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.224 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.225 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.225 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.225 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.225 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.225 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.225 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.225 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.225 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.225 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.226 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.226 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.226 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.226 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.226 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.226 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.226 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.226 163966 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.227 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.227 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.227 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.227 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.227 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.227 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.227 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.227 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.227 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.227 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.228 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.228 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.228 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.228 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.228 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.228 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.228 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.228 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.229 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.229 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.229 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.229 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.229 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.229 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.229 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.229 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.229 163966 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.230 163966 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.230 163966 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.230 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.230 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.230 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.230 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.230 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.230 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.230 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.231 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.231 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.231 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.231 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.231 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.231 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.231 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.232 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.232 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.232 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.232 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.232 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.232 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.232 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.232 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.233 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.233 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.233 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.233 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.233 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.233 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.233 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.233 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.234 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.234 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.234 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.234 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.234 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.234 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.234 163966 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.234 163966 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.243 163966 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.244 163966 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.244 163966 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.244 163966 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.244 163966 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Sep 30 14:25:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 519 B/s wr, 30 op/s
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.259 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c6331d25-78a2-493c-bb43-51ad387342be (UUID: c6331d25-78a2-493c-bb43-51ad387342be) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.280 163966 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.280 163966 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.280 163966 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.280 163966 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.283 163966 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.290 163966 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.296 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c6331d25-78a2-493c-bb43-51ad387342be'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], external_ids={}, name=c6331d25-78a2-493c-bb43-51ad387342be, nb_cfg_timestamp=1759242277954, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.297 163966 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f8c67532f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.298 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.298 163966 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.299 163966 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.299 163966 INFO oslo_service.service [-] Starting 1 workers
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.303 163966 DEBUG oslo_service.service [-] Started child 164119 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.306 163966 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmppe3lrkro/privsep.sock']
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.307 164119 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-172001'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.347 164119 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.348 164119 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.348 164119 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.354 164119 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.360 164119 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.367 164119 INFO eventlet.wsgi.server [-] (164119) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Sep 30 14:25:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:38 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:38 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Sep 30 14:25:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:38.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:25:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:38.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:25:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:38.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.992 163966 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.993 163966 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmppe3lrkro/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.870 164124 INFO oslo.privsep.daemon [-] privsep daemon starting
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.875 164124 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.877 164124 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.878 164124 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164124
Sep 30 14:25:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:38.995 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[b5483709-0589-4e45-810d-dfae90e8ffb3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:25:39 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:39.531 164124 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:25:39 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:39.532 164124 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:25:39 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:39.532 164124 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:25:39 compute-0 ceph-mon[74194]: pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 519 B/s wr, 30 op/s
Sep 30 14:25:39 compute-0 sudo[164130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:25:39 compute-0 sudo[164130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:39 compute-0 sudo[164130]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:39 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:39.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.107 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a23c10-4cc8-4cb0-a8b2-8e82b860fc00]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.110 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, column=external_ids, values=({'neutron:ovn-metadata-id': '00564ea3-b143-568f-9af9-bfcd64ad0c59'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.122 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.131 163966 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.132 163966 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.132 163966 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.132 163966 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.132 163966 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.133 163966 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.133 163966 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.133 163966 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.134 163966 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.134 163966 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.134 163966 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.134 163966 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.134 163966 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.135 163966 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.135 163966 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.135 163966 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.135 163966 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.135 163966 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.136 163966 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.136 163966 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.136 163966 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.136 163966 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.136 163966 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.137 163966 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.137 163966 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.137 163966 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.137 163966 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.137 163966 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.138 163966 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.138 163966 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.138 163966 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.138 163966 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.138 163966 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.139 163966 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.139 163966 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.139 163966 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.139 163966 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.139 163966 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.140 163966 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.140 163966 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.140 163966 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.140 163966 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.140 163966 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.141 163966 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.141 163966 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.141 163966 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.141 163966 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.141 163966 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.141 163966 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.141 163966 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.141 163966 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.142 163966 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.142 163966 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.142 163966 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.142 163966 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.142 163966 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.142 163966 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.142 163966 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.142 163966 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.143 163966 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.143 163966 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.143 163966 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.143 163966 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.143 163966 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.143 163966 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.143 163966 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.144 163966 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.144 163966 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.144 163966 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.145 163966 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.145 163966 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.145 163966 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.146 163966 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.146 163966 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.146 163966 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.146 163966 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.146 163966 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.146 163966 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.147 163966 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.147 163966 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.147 163966 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.147 163966 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.147 163966 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.147 163966 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.147 163966 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.147 163966 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.148 163966 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.148 163966 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.148 163966 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.148 163966 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.148 163966 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.148 163966 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.148 163966 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.148 163966 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.148 163966 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.149 163966 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.149 163966 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.149 163966 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.149 163966 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.149 163966 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.149 163966 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.149 163966 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.149 163966 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.149 163966 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.149 163966 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.150 163966 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.150 163966 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.150 163966 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.150 163966 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.150 163966 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.150 163966 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.151 163966 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.151 163966 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.151 163966 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.151 163966 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.151 163966 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.152 163966 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.152 163966 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.152 163966 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.152 163966 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.153 163966 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.153 163966 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.153 163966 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.153 163966 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.154 163966 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.154 163966 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.154 163966 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.154 163966 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.154 163966 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.154 163966 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.154 163966 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.155 163966 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.155 163966 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.155 163966 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.155 163966 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.155 163966 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.156 163966 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.156 163966 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.156 163966 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.156 163966 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.156 163966 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.157 163966 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.157 163966 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.157 163966 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.157 163966 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.157 163966 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.157 163966 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.158 163966 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.158 163966 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.158 163966 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.158 163966 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.158 163966 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.158 163966 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.158 163966 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.159 163966 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.159 163966 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.159 163966 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.159 163966 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.159 163966 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.159 163966 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.160 163966 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.160 163966 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.160 163966 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.160 163966 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.160 163966 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.161 163966 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.161 163966 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.161 163966 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.161 163966 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.161 163966 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.162 163966 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.162 163966 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.162 163966 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.162 163966 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.162 163966 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.162 163966 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.163 163966 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.163 163966 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.163 163966 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.163 163966 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.163 163966 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.163 163966 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.164 163966 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.164 163966 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.164 163966 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.164 163966 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.165 163966 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.165 163966 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.165 163966 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.165 163966 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.165 163966 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.166 163966 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.166 163966 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.166 163966 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.166 163966 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.166 163966 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.167 163966 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.167 163966 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.167 163966 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.167 163966 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.167 163966 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.167 163966 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.168 163966 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.168 163966 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.168 163966 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.168 163966 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.168 163966 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.168 163966 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.168 163966 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.169 163966 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.169 163966 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.169 163966 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.169 163966 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.169 163966 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.169 163966 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.169 163966 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.170 163966 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.170 163966 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.170 163966 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.170 163966 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.170 163966 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.170 163966 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.170 163966 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.171 163966 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.171 163966 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.171 163966 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.171 163966 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.171 163966 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.171 163966 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.171 163966 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.172 163966 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.172 163966 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.172 163966 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.172 163966 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.172 163966 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.172 163966 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.172 163966 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.173 163966 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.173 163966 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.173 163966 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.173 163966 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.173 163966 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.173 163966 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.174 163966 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.174 163966 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.174 163966 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.174 163966 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.174 163966 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.174 163966 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.175 163966 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.175 163966 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.175 163966 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.175 163966 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.175 163966 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.175 163966 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.176 163966 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.176 163966 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.176 163966 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.176 163966 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.176 163966 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.177 163966 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.177 163966 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.177 163966 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.177 163966 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.177 163966 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.178 163966 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.178 163966 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:40 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.178 163966 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.178 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.178 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.178 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.179 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.179 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.179 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.179 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.179 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.179 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.180 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.180 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.180 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.180 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.180 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.180 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.180 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.181 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.181 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.181 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.181 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.181 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.181 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.182 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.182 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.182 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.182 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.182 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.182 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.183 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.183 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.183 163966 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.183 163966 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.183 163966 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.183 163966 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.183 163966 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:25:40 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:25:40.184 163966 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Sep 30 14:25:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:40.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 519 B/s wr, 30 op/s
Sep 30 14:25:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:40 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288000e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:41 compute-0 ceph-mon[74194]: pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 519 B/s wr, 30 op/s
Sep 30 14:25:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:41 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:41.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:42.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 511 B/s wr, 62 op/s
Sep 30 14:25:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:43 compute-0 sshd-session[164160]: Accepted publickey for zuul from 192.168.122.30 port 52104 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:25:43 compute-0 systemd-logind[808]: New session 53 of user zuul.
Sep 30 14:25:43 compute-0 systemd[1]: Started Session 53 of User zuul.
Sep 30 14:25:43 compute-0 sshd-session[164160]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:25:43 compute-0 ceph-mon[74194]: pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 511 B/s wr, 62 op/s
Sep 30 14:25:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:43 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:43.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:44 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:25:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:44.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:25:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Sep 30 14:25:44 compute-0 python3.9[164315]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:25:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:44 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:25:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:25:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:44] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:25:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:44] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:25:45 compute-0 sudo[164469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whylkrqwoocffawwkifvssgaqihlkmye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242344.7805355-62-102034195251247/AnsiballZ_command.py'
Sep 30 14:25:45 compute-0 sudo[164469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:45 compute-0 python3.9[164471]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:25:45 compute-0 sudo[164469]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:45 compute-0 ceph-mon[74194]: pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Sep 30 14:25:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:25:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:45 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:45.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:46.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Sep 30 14:25:46 compute-0 sudo[164636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iipfdbvjwzqgjmxdmyjdqhzmxgnmylwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242345.833236-95-87137485460189/AnsiballZ_systemd_service.py'
Sep 30 14:25:46 compute-0 sudo[164636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:46 compute-0 python3.9[164638]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 14:25:46 compute-0 systemd[1]: Reloading.
Sep 30 14:25:46 compute-0 systemd-rc-local-generator[164667]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:25:46 compute-0 systemd-sysv-generator[164670]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:25:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:47.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:25:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:47.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:25:47 compute-0 sudo[164636]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:47 compute-0 ceph-mon[74194]: pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Sep 30 14:25:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:47 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:47 compute-0 python3.9[164824]: ansible-ansible.builtin.service_facts Invoked
Sep 30 14:25:47 compute-0 network[164842]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 14:25:47 compute-0 network[164843]: 'network-scripts' will be removed from distribution in near future.
Sep 30 14:25:47 compute-0 network[164844]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 14:25:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:47.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:48 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 14:25:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:48.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 14:25:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Sep 30 14:25:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:48 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:48.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:25:49 compute-0 ceph-mon[74194]: pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Sep 30 14:25:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:49 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:49.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:50 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:50.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Sep 30 14:25:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:50 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:51 compute-0 ceph-mon[74194]: pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Sep 30 14:25:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:51 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 14:25:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:51.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 14:25:52 compute-0 sudo[165111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fasjgrfwjghxvwbjyfnmqslcsylqhkje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242351.8450115-152-2027117279115/AnsiballZ_systemd_service.py'
Sep 30 14:25:52 compute-0 sudo[165111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:52.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Sep 30 14:25:52 compute-0 python3.9[165113]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:25:52 compute-0 sudo[165111]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:52 compute-0 sudo[165264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgoscmcxqgzjqmoviqzmmphckhhdgcji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242352.5694451-152-163866053716253/AnsiballZ_systemd_service.py'
Sep 30 14:25:52 compute-0 sudo[165264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:53 compute-0 python3.9[165266]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:25:53 compute-0 sudo[165264]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:53 compute-0 sudo[165418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtvmnpxosxgpemngwtwwjfbapiwftlpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242353.308424-152-91802860325781/AnsiballZ_systemd_service.py'
Sep 30 14:25:53 compute-0 sudo[165418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:53 compute-0 ceph-mon[74194]: pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Sep 30 14:25:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:53 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:53 compute-0 python3.9[165420]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:25:53 compute-0 sudo[165418]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:53.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:54 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:54.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:25:54 compute-0 sudo[165572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwlzcznmbrtdoklkqjgyxszvnvofsqnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242354.0600882-152-179707224277799/AnsiballZ_systemd_service.py'
Sep 30 14:25:54 compute-0 sudo[165572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:54 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:54 compute-0 python3.9[165574]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:25:54 compute-0 sudo[165572]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:54] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:25:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:25:54] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:25:55 compute-0 sudo[165725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aurxpmffyhfkxeideshimptochsohcco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242354.7654989-152-45400173122912/AnsiballZ_systemd_service.py'
Sep 30 14:25:55 compute-0 sudo[165725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:55 compute-0 python3.9[165727]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:25:55 compute-0 sudo[165725]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:25:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:55 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:55 compute-0 sudo[165880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odtkdiauzdkwrzhzgradikrkxckhezil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242355.5290802-152-13309896920619/AnsiballZ_systemd_service.py'
Sep 30 14:25:55 compute-0 sudo[165880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:55 compute-0 ceph-mon[74194]: pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:25:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:55.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:56 compute-0 python3.9[165882]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:25:56 compute-0 sudo[165880]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:56 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:25:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:56.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:25:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:25:56 compute-0 sudo[166033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sidocfrjhoxyzhhrjksytwfhsmtnjutj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242356.2580962-152-138589024832004/AnsiballZ_systemd_service.py'
Sep 30 14:25:56 compute-0 sudo[166033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:56 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003590 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:56 compute-0 python3.9[166035]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:25:56 compute-0 sudo[166033]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:57.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:25:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:57.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:25:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:57.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:25:57 compute-0 ceph-mon[74194]: pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:25:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:57 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:57.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:25:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:58 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000028s ======
Sep 30 14:25:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:25:58.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Sep 30 14:25:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:25:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:58 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:58 compute-0 sudo[166188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqyiehviztxgbmnpxnporjtxiqjdnhyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242358.2109237-308-19537010153033/AnsiballZ_file.py'
Sep 30 14:25:58 compute-0 sudo[166188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:25:58.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:25:58 compute-0 python3.9[166190]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:58 compute-0 sudo[166188]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:59 compute-0 sudo[166340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxlrouvaobmibdqspkgtssbzyrxwtizv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242359.0425847-308-240793790287348/AnsiballZ_file.py'
Sep 30 14:25:59 compute-0 sudo[166340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:59 compute-0 ceph-mon[74194]: pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:25:59
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'volumes', 'vms', 'default.rgw.meta', '.rgw.root', '.mgr', '.nfs', 'backups', 'cephfs.cephfs.meta']
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:25:59 compute-0 python3.9[166342]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:25:59 compute-0 sudo[166340]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:25:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:25:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:25:59 compute-0 sudo[166420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:25:59 compute-0 sudo[166420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:25:59 compute-0 sudo[166420]: pam_unix(sudo:session): session closed for user root
Sep 30 14:25:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:25:59 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:25:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:25:59 compute-0 sudo[166519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlfckcccuohlwqbmfuqkcyomgkqxxxms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242359.627889-308-5705574069249/AnsiballZ_file.py'
Sep 30 14:25:59 compute-0 sudo[166519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:25:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:25:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:25:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:25:59.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:00 compute-0 python3.9[166521]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:00 compute-0 sudo[166519]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:00.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:26:00 compute-0 sudo[166684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmljmbmyvsmdsutdqbpuxplqkhjdkwut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242360.1983938-308-190009867540244/AnsiballZ_file.py'
Sep 30 14:26:00 compute-0 sudo[166684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:00 compute-0 podman[166645]: 2025-09-30 14:26:00.503643128 +0000 UTC m=+0.083757587 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:26:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:26:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:00 compute-0 python3.9[166691]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:00 compute-0 sudo[166684]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:26:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:26:01 compute-0 sudo[166849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfxdtccmtgesdonkijtdrzefcnzosdni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242360.783354-308-1889463650932/AnsiballZ_file.py'
Sep 30 14:26:01 compute-0 sudo[166849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:01 compute-0 python3.9[166851]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:01 compute-0 sudo[166849]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:01 compute-0 ceph-mon[74194]: pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:26:01 compute-0 sudo[167002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbagjauirzkxrpvpvurkptstsohcovmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242361.3438668-308-109210336441612/AnsiballZ_file.py'
Sep 30 14:26:01 compute-0 sudo[167002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142601 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:26:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:01 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:01 compute-0 python3.9[167004]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:01 compute-0 sudo[167002]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:01.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:02 compute-0 sudo[167155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usehxucagtclvxhgzmljtaqvjnsjmpwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242361.9188993-308-190850359349780/AnsiballZ_file.py'
Sep 30 14:26:02 compute-0 sudo[167155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:02.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:26:02 compute-0 python3.9[167157]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:02 compute-0 sudo[167155]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:03 compute-0 sudo[167308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzphrybmbmablkhqjfuyfeklcwnfnrmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242363.255074-458-31503719480385/AnsiballZ_file.py'
Sep 30 14:26:03 compute-0 sudo[167308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:03 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:03 compute-0 python3.9[167310]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:03 compute-0 sudo[167308]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:03.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:04 compute-0 ceph-mon[74194]: pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:26:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:04 compute-0 sudo[167461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egwyvzpaxrkhvbbojefxaborsuciwcqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242363.9527044-458-123175470342121/AnsiballZ_file.py'
Sep 30 14:26:04 compute-0 sudo[167461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:04.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:26:04 compute-0 python3.9[167463]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:04 compute-0 sudo[167461]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:04] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:26:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:04] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:26:04 compute-0 sudo[167613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inwboujouzeqaamuvqqmiwikhqdnauve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242364.5731359-458-20399773910850/AnsiballZ_file.py'
Sep 30 14:26:04 compute-0 sudo[167613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:05 compute-0 python3.9[167615]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:05 compute-0 sudo[167613]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:05 compute-0 ceph-mon[74194]: pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:26:05 compute-0 sudo[167766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xceylrihvvbhycfpeafuxlyfhdhcxfbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242365.1741085-458-244834971075982/AnsiballZ_file.py'
Sep 30 14:26:05 compute-0 sudo[167766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:05 compute-0 python3.9[167768]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:05 compute-0 sudo[167766]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:05 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:05.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:06 compute-0 sudo[167919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zctrksmocaqessznyhifjwgbudmwbgop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242365.765648-458-75250762129284/AnsiballZ_file.py'
Sep 30 14:26:06 compute-0 sudo[167919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:06.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:06 compute-0 python3.9[167921]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:26:06 compute-0 sudo[167919]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:06 compute-0 podman[168045]: 2025-09-30 14:26:06.767353158 +0000 UTC m=+0.044814470 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:26:06 compute-0 sudo[168090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpbrwijbkpiiixncjfggvwugvgcahpfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242366.382193-458-87144883045113/AnsiballZ_file.py'
Sep 30 14:26:06 compute-0 sudo[168090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:06 compute-0 python3.9[168092]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:07.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:26:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:07.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:26:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:07.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:26:07 compute-0 sudo[168090]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:07 compute-0 ceph-mon[74194]: pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:26:07 compute-0 sudo[168243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywkvnnhfgsptvoqfsdlfajgbejwwjadl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242367.147032-458-232997643294427/AnsiballZ_file.py'
Sep 30 14:26:07 compute-0 sudo[168243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:07 compute-0 python3.9[168245]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:26:07 compute-0 sudo[168243]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:07 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:07.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:08.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:26:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:08 compute-0 sudo[168396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wohfhcucqostqwpgdiskgtfbandpvmoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242368.417665-611-45386471349374/AnsiballZ_command.py'
Sep 30 14:26:08 compute-0 sudo[168396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:08.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:26:08 compute-0 python3.9[168398]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:26:08 compute-0 sudo[168396]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:09 compute-0 ceph-mon[74194]: pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:26:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:09 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:09 compute-0 python3.9[168551]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 14:26:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:09 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:26:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:09.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:10.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:26:10 compute-0 sudo[168702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfjjutixwaeirpvpjuqoygbfgecgduvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242370.0868485-665-200494888592050/AnsiballZ_systemd_service.py'
Sep 30 14:26:10 compute-0 sudo[168702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:10 compute-0 python3.9[168704]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 14:26:10 compute-0 systemd[1]: Reloading.
Sep 30 14:26:10 compute-0 systemd-rc-local-generator[168731]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:26:10 compute-0 systemd-sysv-generator[168735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:26:10 compute-0 sudo[168702]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:11 compute-0 ceph-mon[74194]: pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:26:11 compute-0 sudo[168889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpknoalykqsubfsxcxiloloefgjilovs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242371.2174346-689-92869075767136/AnsiballZ_command.py'
Sep 30 14:26:11 compute-0 sudo[168889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:11 compute-0 python3.9[168891]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:26:11 compute-0 sudo[168889]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:11 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:11.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:12.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:26:12 compute-0 sudo[169045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceqcohwqknufomeiytxjgcrqxvguzccw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242372.0350597-689-169767204425799/AnsiballZ_command.py'
Sep 30 14:26:12 compute-0 sudo[169045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:12 compute-0 python3.9[169047]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:26:12 compute-0 sudo[169045]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:12 compute-0 sudo[169198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbihpommackbrylxojzyvkdkwfgvjhjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242372.6248784-689-45349665353270/AnsiballZ_command.py'
Sep 30 14:26:12 compute-0 sudo[169198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:26:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:26:13 compute-0 python3.9[169200]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:26:13 compute-0 sudo[169198]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:13 compute-0 ceph-mon[74194]: pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:26:13 compute-0 sudo[169352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdzeusunizvigkwspzyaalstqwkzsaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242373.2276762-689-154239446183587/AnsiballZ_command.py'
Sep 30 14:26:13 compute-0 sudo[169352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:13 compute-0 python3.9[169354]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:26:13 compute-0 sudo[169352]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:13 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:13.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:14 compute-0 sudo[169506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-badvyowlanvalvffefwnqddwspbspwhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242373.8596706-689-219904622377324/AnsiballZ_command.py'
Sep 30 14:26:14 compute-0 sudo[169506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:14 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:26:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:14.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:14 compute-0 python3.9[169508]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:26:14 compute-0 sudo[169506]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:14 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:26:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:26:14 compute-0 sudo[169659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecccwwfknuvssentvtiicztsyawjchmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242374.4819-689-254859670644687/AnsiballZ_command.py'
Sep 30 14:26:14 compute-0 sudo[169659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:14] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:26:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:14] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:26:14 compute-0 python3.9[169661]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:26:14 compute-0 sudo[169659]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:15 compute-0 sudo[169812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaitojqhjgavsdpznjxtsqdgptrsmpzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242375.0663235-689-209650609535858/AnsiballZ_command.py'
Sep 30 14:26:15 compute-0 sudo[169812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:15 compute-0 ceph-mon[74194]: pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:26:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:26:15 compute-0 python3.9[169815]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:26:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:15 compute-0 sudo[169812]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:15 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:15 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:26:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:15.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:16 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1023 B/s wr, 92 op/s
Sep 30 14:26:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:16.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:16 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:16 compute-0 sudo[169967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecfkvzcpxxtbqswbiqbmafgsrlgzsitq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242376.5189881-851-139955929365975/AnsiballZ_getent.py'
Sep 30 14:26:16 compute-0 sudo[169967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:17.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:26:17 compute-0 python3.9[169969]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Sep 30 14:26:17 compute-0 sudo[169967]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:17 compute-0 ceph-mon[74194]: pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1023 B/s wr, 92 op/s
Sep 30 14:26:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:17 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:17 compute-0 sudo[170122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcqeclzyvbsczezxambbiolvluvipxzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242377.3647938-875-244911724769499/AnsiballZ_group.py'
Sep 30 14:26:17 compute-0 sudo[170122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:17.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:18 compute-0 python3.9[170124]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 14:26:18 compute-0 groupadd[170125]: group added to /etc/group: name=libvirt, GID=42473
Sep 30 14:26:18 compute-0 groupadd[170125]: group added to /etc/gshadow: name=libvirt
Sep 30 14:26:18 compute-0 groupadd[170125]: new group: name=libvirt, GID=42473
Sep 30 14:26:18 compute-0 sudo[170122]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:18 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1023 B/s wr, 92 op/s
Sep 30 14:26:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:18.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:18 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:18 compute-0 sudo[170280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soohgunfiunmnljaaifsixjxaxlhugji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242378.245037-899-178800511594558/AnsiballZ_user.py'
Sep 30 14:26:18 compute-0 sudo[170280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:18.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:26:18 compute-0 python3.9[170282]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Sep 30 14:26:18 compute-0 useradd[170284]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Sep 30 14:26:18 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:26:19 compute-0 sudo[170280]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:19 compute-0 ceph-mon[74194]: pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1023 B/s wr, 92 op/s
Sep 30 14:26:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:19 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:19 compute-0 sudo[170414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:26:19 compute-0 sudo[170414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:19 compute-0 sudo[170414]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:19 compute-0 sudo[170468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkvyzcnsrkhdpsdypnrqdmzytiulfonr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242379.5768967-932-193174986061467/AnsiballZ_setup.py'
Sep 30 14:26:19 compute-0 sudo[170468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:19.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:20 compute-0 python3.9[170470]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:26:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1023 B/s wr, 92 op/s
Sep 30 14:26:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:20.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:20 compute-0 sudo[170468]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:20 compute-0 sudo[170552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zetyzqyorucxpouznifhgsicrpymuaet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242379.5768967-932-193174986061467/AnsiballZ_dnf.py'
Sep 30 14:26:20 compute-0 sudo[170552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:26:21 compute-0 python3.9[170554]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:26:21 compute-0 ceph-mon[74194]: pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1023 B/s wr, 92 op/s
Sep 30 14:26:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142621 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:26:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:21 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac002bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:21.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:22 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 1023 B/s wr, 122 op/s
Sep 30 14:26:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:22.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:22 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:23 compute-0 ceph-mon[74194]: pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 1023 B/s wr, 122 op/s
Sep 30 14:26:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:23 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:23.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:24 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac002bc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 426 B/s wr, 121 op/s
Sep 30 14:26:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:24.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:24 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:24] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:26:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:24] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:26:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:25 compute-0 ceph-mon[74194]: pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 426 B/s wr, 121 op/s
Sep 30 14:26:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:25 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:25.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:26 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 426 B/s wr, 121 op/s
Sep 30 14:26:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:26.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:26 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:27.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:26:27 compute-0 ceph-mon[74194]: pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 426 B/s wr, 121 op/s
Sep 30 14:26:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:27 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:28 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Sep 30 14:26:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:28.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:28 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:28.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:26:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:26:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:26:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:26:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:26:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:29 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:29 compute-0 ceph-mon[74194]: pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Sep 30 14:26:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:26:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:26:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:26:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:26:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:26:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:30 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Sep 30 14:26:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:30.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:30 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:31 compute-0 podman[170737]: 2025-09-30 14:26:31.189138724 +0000 UTC m=+0.112748189 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:26:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:31 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:31 compute-0 ceph-mon[74194]: pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Sep 30 14:26:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:32.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 30 op/s
Sep 30 14:26:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:32.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:33 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:33 compute-0 ceph-mon[74194]: pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 30 op/s
Sep 30 14:26:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:34.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:34 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:26:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:34.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:34 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:34] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Sep 30 14:26:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:34] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Sep 30 14:26:35 compute-0 ceph-mon[74194]: pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:26:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:35 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:26:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:36.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:37.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:26:37 compute-0 sudo[170788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:26:37 compute-0 sudo[170788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:37 compute-0 sudo[170788]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:37 compute-0 sudo[170819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:26:37 compute-0 sudo[170819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:37 compute-0 podman[170811]: 2025-09-30 14:26:37.13612689 +0000 UTC m=+0.057117135 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:26:37 compute-0 ceph-mon[74194]: pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:26:37 compute-0 sudo[170819]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142637 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:26:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:37 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:26:37 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:26:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:26:37 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:26:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Sep 30 14:26:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:26:37 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:26:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:26:37 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:26:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:26:37 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:26:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:26:37 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:26:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:26:37 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:26:37 compute-0 sudo[170890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:26:37 compute-0 sudo[170890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:37 compute-0 sudo[170890]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:38.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:38 compute-0 sudo[170915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:26:38 compute-0 sudo[170915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:38 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:26:38.237 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:26:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:26:38.238 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:26:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:26:38.238 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:26:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:38.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:38 compute-0 podman[170981]: 2025-09-30 14:26:38.384322013 +0000 UTC m=+0.039795440 container create 4d776dfb74c73c419337d581f409f1c4b355a96c1013fe41367020743b1c59ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:26:38 compute-0 systemd[1]: Started libpod-conmon-4d776dfb74c73c419337d581f409f1c4b355a96c1013fe41367020743b1c59ea.scope.
Sep 30 14:26:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:26:38 compute-0 podman[170981]: 2025-09-30 14:26:38.367239434 +0000 UTC m=+0.022712881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:26:38 compute-0 podman[170981]: 2025-09-30 14:26:38.471406632 +0000 UTC m=+0.126880089 container init 4d776dfb74c73c419337d581f409f1c4b355a96c1013fe41367020743b1c59ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_greider, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:26:38 compute-0 podman[170981]: 2025-09-30 14:26:38.48513384 +0000 UTC m=+0.140607267 container start 4d776dfb74c73c419337d581f409f1c4b355a96c1013fe41367020743b1c59ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:26:38 compute-0 podman[170981]: 2025-09-30 14:26:38.48847934 +0000 UTC m=+0.143952797 container attach 4d776dfb74c73c419337d581f409f1c4b355a96c1013fe41367020743b1c59ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_greider, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:26:38 compute-0 awesome_greider[170997]: 167 167
Sep 30 14:26:38 compute-0 systemd[1]: libpod-4d776dfb74c73c419337d581f409f1c4b355a96c1013fe41367020743b1c59ea.scope: Deactivated successfully.
Sep 30 14:26:38 compute-0 podman[170981]: 2025-09-30 14:26:38.493632399 +0000 UTC m=+0.149105836 container died 4d776dfb74c73c419337d581f409f1c4b355a96c1013fe41367020743b1c59ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_greider, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 14:26:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:26:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:26:38 compute-0 ceph-mon[74194]: pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Sep 30 14:26:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:26:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:26:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:26:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:26:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aa8004b97661053f5968619f33b054185f32c4f2b9f162070842a304a5c1294-merged.mount: Deactivated successfully.
Sep 30 14:26:38 compute-0 podman[170981]: 2025-09-30 14:26:38.55882708 +0000 UTC m=+0.214300507 container remove 4d776dfb74c73c419337d581f409f1c4b355a96c1013fe41367020743b1c59ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:26:38 compute-0 systemd[1]: libpod-conmon-4d776dfb74c73c419337d581f409f1c4b355a96c1013fe41367020743b1c59ea.scope: Deactivated successfully.
Sep 30 14:26:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:38 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:38 compute-0 podman[171023]: 2025-09-30 14:26:38.747281011 +0000 UTC m=+0.060008283 container create 2900e42bd3fc296a8e38c142fa6733315c1aefa7db345763533b49dc9ee49acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:26:38 compute-0 systemd[1]: Started libpod-conmon-2900e42bd3fc296a8e38c142fa6733315c1aefa7db345763533b49dc9ee49acf.scope.
Sep 30 14:26:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:26:38 compute-0 podman[171023]: 2025-09-30 14:26:38.728546618 +0000 UTC m=+0.041273910 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ce076f756cd93d5e94688b1d4990cf9cb4a342fcdb0def327f643ab56f0721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ce076f756cd93d5e94688b1d4990cf9cb4a342fcdb0def327f643ab56f0721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ce076f756cd93d5e94688b1d4990cf9cb4a342fcdb0def327f643ab56f0721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ce076f756cd93d5e94688b1d4990cf9cb4a342fcdb0def327f643ab56f0721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ce076f756cd93d5e94688b1d4990cf9cb4a342fcdb0def327f643ab56f0721/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:38 compute-0 podman[171023]: 2025-09-30 14:26:38.837820223 +0000 UTC m=+0.150547515 container init 2900e42bd3fc296a8e38c142fa6733315c1aefa7db345763533b49dc9ee49acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:26:38 compute-0 podman[171023]: 2025-09-30 14:26:38.85040376 +0000 UTC m=+0.163131032 container start 2900e42bd3fc296a8e38c142fa6733315c1aefa7db345763533b49dc9ee49acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:26:38 compute-0 podman[171023]: 2025-09-30 14:26:38.853764741 +0000 UTC m=+0.166492073 container attach 2900e42bd3fc296a8e38c142fa6733315c1aefa7db345763533b49dc9ee49acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:26:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:38.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:26:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:38.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:26:39 compute-0 boring_hugle[171039]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:26:39 compute-0 boring_hugle[171039]: --> All data devices are unavailable
Sep 30 14:26:39 compute-0 systemd[1]: libpod-2900e42bd3fc296a8e38c142fa6733315c1aefa7db345763533b49dc9ee49acf.scope: Deactivated successfully.
Sep 30 14:26:39 compute-0 podman[171023]: 2025-09-30 14:26:39.230349065 +0000 UTC m=+0.543076377 container died 2900e42bd3fc296a8e38c142fa6733315c1aefa7db345763533b49dc9ee49acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ce076f756cd93d5e94688b1d4990cf9cb4a342fcdb0def327f643ab56f0721-merged.mount: Deactivated successfully.
Sep 30 14:26:39 compute-0 podman[171023]: 2025-09-30 14:26:39.287049567 +0000 UTC m=+0.599776849 container remove 2900e42bd3fc296a8e38c142fa6733315c1aefa7db345763533b49dc9ee49acf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 14:26:39 compute-0 systemd[1]: libpod-conmon-2900e42bd3fc296a8e38c142fa6733315c1aefa7db345763533b49dc9ee49acf.scope: Deactivated successfully.
Sep 30 14:26:39 compute-0 sudo[170915]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:39 compute-0 sudo[171067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:26:39 compute-0 sudo[171067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:39 compute-0 sudo[171067]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:39 compute-0 sudo[171092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:26:39 compute-0 sudo[171092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:39 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Sep 30 14:26:39 compute-0 podman[171159]: 2025-09-30 14:26:39.840824969 +0000 UTC m=+0.044053864 container create 0a112bbefc946814d03d80dd9ef3432c32d6158aa9ee4687a194dac2e48dcc0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_torvalds, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:26:39 compute-0 systemd[1]: Started libpod-conmon-0a112bbefc946814d03d80dd9ef3432c32d6158aa9ee4687a194dac2e48dcc0c.scope.
Sep 30 14:26:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:26:39 compute-0 sudo[171174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:26:39 compute-0 sudo[171174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:39 compute-0 sudo[171174]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:39 compute-0 podman[171159]: 2025-09-30 14:26:39.822331013 +0000 UTC m=+0.025559958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:26:39 compute-0 podman[171159]: 2025-09-30 14:26:39.926254504 +0000 UTC m=+0.129483419 container init 0a112bbefc946814d03d80dd9ef3432c32d6158aa9ee4687a194dac2e48dcc0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Sep 30 14:26:39 compute-0 podman[171159]: 2025-09-30 14:26:39.932342627 +0000 UTC m=+0.135571522 container start 0a112bbefc946814d03d80dd9ef3432c32d6158aa9ee4687a194dac2e48dcc0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Sep 30 14:26:39 compute-0 podman[171159]: 2025-09-30 14:26:39.934938187 +0000 UTC m=+0.138167102 container attach 0a112bbefc946814d03d80dd9ef3432c32d6158aa9ee4687a194dac2e48dcc0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:26:39 compute-0 pedantic_torvalds[171199]: 167 167
Sep 30 14:26:39 compute-0 systemd[1]: libpod-0a112bbefc946814d03d80dd9ef3432c32d6158aa9ee4687a194dac2e48dcc0c.scope: Deactivated successfully.
Sep 30 14:26:39 compute-0 podman[171159]: 2025-09-30 14:26:39.937706691 +0000 UTC m=+0.140935586 container died 0a112bbefc946814d03d80dd9ef3432c32d6158aa9ee4687a194dac2e48dcc0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_torvalds, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c89a6025188abe0a901e532bc109b481703be987016919552259c8155396033-merged.mount: Deactivated successfully.
Sep 30 14:26:39 compute-0 podman[171159]: 2025-09-30 14:26:39.978521877 +0000 UTC m=+0.181750772 container remove 0a112bbefc946814d03d80dd9ef3432c32d6158aa9ee4687a194dac2e48dcc0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_torvalds, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:26:39 compute-0 systemd[1]: libpod-conmon-0a112bbefc946814d03d80dd9ef3432c32d6158aa9ee4687a194dac2e48dcc0c.scope: Deactivated successfully.
Sep 30 14:26:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:40 compute-0 podman[171224]: 2025-09-30 14:26:40.187391227 +0000 UTC m=+0.056290393 container create f5aec06cac1ab21e9fa0adc3d6f55de7de15eefc24dd355fd97a3fd4dc26caf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_darwin, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:26:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:40 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:40 compute-0 systemd[1]: Started libpod-conmon-f5aec06cac1ab21e9fa0adc3d6f55de7de15eefc24dd355fd97a3fd4dc26caf5.scope.
Sep 30 14:26:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b717af633f416b279608e134eb756d5188cc91b8166103ad2ddb6ab2359e0a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b717af633f416b279608e134eb756d5188cc91b8166103ad2ddb6ab2359e0a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b717af633f416b279608e134eb756d5188cc91b8166103ad2ddb6ab2359e0a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b717af633f416b279608e134eb756d5188cc91b8166103ad2ddb6ab2359e0a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:40 compute-0 podman[171224]: 2025-09-30 14:26:40.162305683 +0000 UTC m=+0.031204909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:26:40 compute-0 podman[171224]: 2025-09-30 14:26:40.272977265 +0000 UTC m=+0.141876471 container init f5aec06cac1ab21e9fa0adc3d6f55de7de15eefc24dd355fd97a3fd4dc26caf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_darwin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:26:40 compute-0 podman[171224]: 2025-09-30 14:26:40.281230537 +0000 UTC m=+0.150129713 container start f5aec06cac1ab21e9fa0adc3d6f55de7de15eefc24dd355fd97a3fd4dc26caf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_darwin, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:26:40 compute-0 podman[171224]: 2025-09-30 14:26:40.284635859 +0000 UTC m=+0.153535055 container attach f5aec06cac1ab21e9fa0adc3d6f55de7de15eefc24dd355fd97a3fd4dc26caf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_darwin, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:26:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:40.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:40 compute-0 silly_darwin[171241]: {
Sep 30 14:26:40 compute-0 silly_darwin[171241]:     "0": [
Sep 30 14:26:40 compute-0 silly_darwin[171241]:         {
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "devices": [
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "/dev/loop3"
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             ],
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "lv_name": "ceph_lv0",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "lv_size": "21470642176",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "name": "ceph_lv0",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "tags": {
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.cluster_name": "ceph",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.crush_device_class": "",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.encrypted": "0",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.osd_id": "0",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.type": "block",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.vdo": "0",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:                 "ceph.with_tpm": "0"
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             },
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "type": "block",
Sep 30 14:26:40 compute-0 silly_darwin[171241]:             "vg_name": "ceph_vg0"
Sep 30 14:26:40 compute-0 silly_darwin[171241]:         }
Sep 30 14:26:40 compute-0 silly_darwin[171241]:     ]
Sep 30 14:26:40 compute-0 silly_darwin[171241]: }
Sep 30 14:26:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:40 compute-0 systemd[1]: libpod-f5aec06cac1ab21e9fa0adc3d6f55de7de15eefc24dd355fd97a3fd4dc26caf5.scope: Deactivated successfully.
Sep 30 14:26:40 compute-0 podman[171224]: 2025-09-30 14:26:40.595435666 +0000 UTC m=+0.464334842 container died f5aec06cac1ab21e9fa0adc3d6f55de7de15eefc24dd355fd97a3fd4dc26caf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b717af633f416b279608e134eb756d5188cc91b8166103ad2ddb6ab2359e0a1-merged.mount: Deactivated successfully.
Sep 30 14:26:40 compute-0 podman[171224]: 2025-09-30 14:26:40.641158534 +0000 UTC m=+0.510057710 container remove f5aec06cac1ab21e9fa0adc3d6f55de7de15eefc24dd355fd97a3fd4dc26caf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_darwin, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:26:40 compute-0 systemd[1]: libpod-conmon-f5aec06cac1ab21e9fa0adc3d6f55de7de15eefc24dd355fd97a3fd4dc26caf5.scope: Deactivated successfully.
Sep 30 14:26:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:40 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:40 compute-0 sudo[171092]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:40 compute-0 sudo[171264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:26:40 compute-0 sudo[171264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:40 compute-0 sudo[171264]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:40 compute-0 sudo[171289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:26:40 compute-0 sudo[171289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:40 compute-0 ceph-mon[74194]: pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Sep 30 14:26:41 compute-0 podman[171352]: 2025-09-30 14:26:41.163938384 +0000 UTC m=+0.039490072 container create fe97cb40481389af07d63a99c900869b7723e7c1f6c1c5deb143a02ff54a21d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 14:26:41 compute-0 systemd[1]: Started libpod-conmon-fe97cb40481389af07d63a99c900869b7723e7c1f6c1c5deb143a02ff54a21d4.scope.
Sep 30 14:26:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:26:41 compute-0 podman[171352]: 2025-09-30 14:26:41.225780055 +0000 UTC m=+0.101331703 container init fe97cb40481389af07d63a99c900869b7723e7c1f6c1c5deb143a02ff54a21d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_keller, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:26:41 compute-0 podman[171352]: 2025-09-30 14:26:41.234226682 +0000 UTC m=+0.109778330 container start fe97cb40481389af07d63a99c900869b7723e7c1f6c1c5deb143a02ff54a21d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:26:41 compute-0 podman[171352]: 2025-09-30 14:26:41.237106809 +0000 UTC m=+0.112658477 container attach fe97cb40481389af07d63a99c900869b7723e7c1f6c1c5deb143a02ff54a21d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_keller, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:26:41 compute-0 thirsty_keller[171369]: 167 167
Sep 30 14:26:41 compute-0 systemd[1]: libpod-fe97cb40481389af07d63a99c900869b7723e7c1f6c1c5deb143a02ff54a21d4.scope: Deactivated successfully.
Sep 30 14:26:41 compute-0 podman[171352]: 2025-09-30 14:26:41.239511274 +0000 UTC m=+0.115062922 container died fe97cb40481389af07d63a99c900869b7723e7c1f6c1c5deb143a02ff54a21d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Sep 30 14:26:41 compute-0 podman[171352]: 2025-09-30 14:26:41.145749775 +0000 UTC m=+0.021301453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-142b462c8bcbe12a032db26f273b4a66a29c01fc60ad1b023fb13d13088e61f0-merged.mount: Deactivated successfully.
Sep 30 14:26:41 compute-0 podman[171352]: 2025-09-30 14:26:41.273848716 +0000 UTC m=+0.149400364 container remove fe97cb40481389af07d63a99c900869b7723e7c1f6c1c5deb143a02ff54a21d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:26:41 compute-0 systemd[1]: libpod-conmon-fe97cb40481389af07d63a99c900869b7723e7c1f6c1c5deb143a02ff54a21d4.scope: Deactivated successfully.
Sep 30 14:26:41 compute-0 podman[171394]: 2025-09-30 14:26:41.448833405 +0000 UTC m=+0.040499888 container create d197d76dfabc133f4f318b12b298c97ae4ca65a98c1d0f48706cbefa66d7d4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:26:41 compute-0 systemd[1]: Started libpod-conmon-d197d76dfabc133f4f318b12b298c97ae4ca65a98c1d0f48706cbefa66d7d4c2.scope.
Sep 30 14:26:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd66d569793e5c46cf3cbbac4a09d31aba05f6e363fcd26fb607c8f44d510dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd66d569793e5c46cf3cbbac4a09d31aba05f6e363fcd26fb607c8f44d510dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:41 compute-0 podman[171394]: 2025-09-30 14:26:41.430926484 +0000 UTC m=+0.022592997 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd66d569793e5c46cf3cbbac4a09d31aba05f6e363fcd26fb607c8f44d510dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd66d569793e5c46cf3cbbac4a09d31aba05f6e363fcd26fb607c8f44d510dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:26:41 compute-0 podman[171394]: 2025-09-30 14:26:41.541024691 +0000 UTC m=+0.132691204 container init d197d76dfabc133f4f318b12b298c97ae4ca65a98c1d0f48706cbefa66d7d4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:26:41 compute-0 podman[171394]: 2025-09-30 14:26:41.547651169 +0000 UTC m=+0.139317662 container start d197d76dfabc133f4f318b12b298c97ae4ca65a98c1d0f48706cbefa66d7d4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_brown, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:26:41 compute-0 podman[171394]: 2025-09-30 14:26:41.55624004 +0000 UTC m=+0.147906533 container attach d197d76dfabc133f4f318b12b298c97ae4ca65a98c1d0f48706cbefa66d7d4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:26:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:41 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Sep 30 14:26:41 compute-0 ceph-mon[74194]: pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Sep 30 14:26:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:42.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:42 compute-0 lvm[171487]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:26:42 compute-0 lvm[171487]: VG ceph_vg0 finished
Sep 30 14:26:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:42 compute-0 epic_brown[171411]: {}
Sep 30 14:26:42 compute-0 systemd[1]: libpod-d197d76dfabc133f4f318b12b298c97ae4ca65a98c1d0f48706cbefa66d7d4c2.scope: Deactivated successfully.
Sep 30 14:26:42 compute-0 podman[171394]: 2025-09-30 14:26:42.257475373 +0000 UTC m=+0.849141866 container died d197d76dfabc133f4f318b12b298c97ae4ca65a98c1d0f48706cbefa66d7d4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_brown, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:26:42 compute-0 systemd[1]: libpod-d197d76dfabc133f4f318b12b298c97ae4ca65a98c1d0f48706cbefa66d7d4c2.scope: Consumed 1.133s CPU time.
Sep 30 14:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fd66d569793e5c46cf3cbbac4a09d31aba05f6e363fcd26fb607c8f44d510dd-merged.mount: Deactivated successfully.
Sep 30 14:26:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:42.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:42 compute-0 podman[171394]: 2025-09-30 14:26:42.323929758 +0000 UTC m=+0.915596241 container remove d197d76dfabc133f4f318b12b298c97ae4ca65a98c1d0f48706cbefa66d7d4c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:26:42 compute-0 systemd[1]: libpod-conmon-d197d76dfabc133f4f318b12b298c97ae4ca65a98c1d0f48706cbefa66d7d4c2.scope: Deactivated successfully.
Sep 30 14:26:42 compute-0 sudo[171289]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:26:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:26:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:26:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:26:42 compute-0 sudo[171504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:26:42 compute-0 sudo[171504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:26:42 compute-0 sudo[171504]: pam_unix(sudo:session): session closed for user root
Sep 30 14:26:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:26:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:26:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:43 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 0 op/s
Sep 30 14:26:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:44.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:44 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:44.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:44 compute-0 ceph-mon[74194]: pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 0 op/s
Sep 30 14:26:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:26:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
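[annotation] The audit line above shows the mgr dispatching a mon command in its JSON form ({"prefix": "osd blocklist ls", "format": "json"}). A minimal sketch of issuing the same command from the librados Python binding follows; the conffile path and client name are assumptions for a typical cephadm host, and the python3-rados package is assumed to be installed.

    import json
    import rados

    # Illustrative sketch: send the same mon command the mgr dispatched above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()

    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print('rc=%d' % ret, outbuf.decode() or outs)

    cluster.shutdown()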
Sep 30 14:26:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:44 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:44] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Sep 30 14:26:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:44] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Sep 30 14:26:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:26:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
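[annotation] The recurring _set_new_cache_sizes line reports the mon's cache budget split in raw bytes, which is hard to read at a glance. The snippet below only converts the figures logged above into MiB and into fractions of the reported cache_size; the meaning of the individual fields is internal to the mon and is not inferred here.

    # Readability aid for the _set_new_cache_sizes line above.
    MiB = 1024 * 1024
    fields = {
        'cache_size': 1020054731,
        'inc_alloc': 348127232,
        'full_alloc': 348127232,
        'kv_alloc': 318767104,
    }
    for name, value in fields.items():
        share = value / fields['cache_size']
        print(f"{name:>10}: {value / MiB:8.1f} MiB  ({share:5.1%} of cache_size)")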
Sep 30 14:26:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:45 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 0 op/s
Sep 30 14:26:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:46.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:46.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:26:46 compute-0 ceph-mon[74194]: pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 0 op/s
Sep 30 14:26:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:47.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:26:47 compute-0 kernel: SELinux:  Converting 2772 SID table entries...
Sep 30 14:26:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 14:26:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 14:26:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 14:26:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 14:26:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 14:26:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 14:26:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 14:26:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:47 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c0010d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 443 B/s rd, 88 B/s wr, 0 op/s
Sep 30 14:26:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:48.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:48 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:48 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:48.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:26:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:48.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:26:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:48.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
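[annotation] The alertmanager lines above show the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443) being unreachable: each POST attempt times out ("dial tcp ... i/o timeout") and the dispatcher gives up after two or three attempts when its overall context deadline expires. The sketch below reproduces that "retry until an overall deadline" pattern for the same URLs; it is an illustration of the behaviour recorded here, not Alertmanager's implementation, and it assumes the third-party requests library plus hypothetical deadline/timeout values.

    import time
    import requests

    URLS = [
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    ]
    OVERALL_DEADLINE = 15.0    # assumed overall notify deadline, seconds
    PER_ATTEMPT_TIMEOUT = 5.0  # assumed per-attempt connect/read timeout

    def notify(url, payload):
        start = time.monotonic()
        attempts = 0
        while time.monotonic() - start < OVERALL_DEADLINE:
            attempts += 1
            try:
                r = requests.post(url, json=payload, timeout=PER_ATTEMPT_TIMEOUT)
                r.raise_for_status()
                return attempts
            except requests.RequestException as exc:
                print(f"attempt {attempts} failed for {url}: {exc}")
        raise TimeoutError(f"notify retry canceled after {attempts} attempts: {url}")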
Sep 30 14:26:48 compute-0 ceph-mon[74194]: pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 443 B/s rd, 88 B/s wr, 0 op/s
Sep 30 14:26:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:49 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:26:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:49 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:26:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:49 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c001e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:26:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:50.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:50 compute-0 ceph-mon[74194]: pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:26:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:50 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:26:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:50.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:26:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:50 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:51 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:26:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:52.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c002020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:52.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
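[annotation] The ganesha grace-period sequence recorded above: the server entered grace at 14:26:46 with a 90-second duration, finished reloading client recovery info from the backend at 14:26:49, found "reclaim complete(0) clid count(0)", and lifted grace early at 14:26:52. The sketch below just restates that timeline and the early-lift condition visible in these lines; it is a paraphrase of the logged events, not ganesha's source code.

    from datetime import datetime, timedelta

    # Values taken from the ganesha lines above.
    grace_start = datetime(2025, 9, 30, 14, 26, 46)
    grace_duration = timedelta(seconds=90)
    reclaim_outstanding = 0   # "reclaim complete(0)" in the log
    clid_count = 0            # "clid count(0)" in the log

    if reclaim_outstanding == 0 and clid_count == 0:
        lifted_at = datetime(2025, 9, 30, 14, 26, 52)   # "NOT IN GRACE" line
    else:
        lifted_at = grace_start + grace_duration

    print("grace held for", (lifted_at - grace_start).seconds,
          "seconds of a possible", grace_duration.seconds)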
Sep 30 14:26:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:52 compute-0 ceph-mon[74194]: pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:26:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:53 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:26:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:26:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:54.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:26:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:54 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:54.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:54 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c0029d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:54] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Sep 30 14:26:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:26:54] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Sep 30 14:26:54 compute-0 ceph-mon[74194]: pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:26:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:26:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:55 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:26:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:56.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:56 compute-0 ceph-mon[74194]: pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:26:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:56 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:26:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:56.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:26:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:56 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:57.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:26:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:57.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:26:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:57.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:26:57 compute-0 kernel: SELinux:  Converting 2772 SID table entries...
Sep 30 14:26:57 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 14:26:57 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 14:26:57 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 14:26:57 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 14:26:57 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 14:26:57 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 14:26:57 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 14:26:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142657 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 3ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:26:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:57 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c0029d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:26:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:26:58.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:58 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:26:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:26:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:26:58.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:26:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:58 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:26:58.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:26:58 compute-0 ceph-mon[74194]: pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:26:59
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr', '.nfs', 'volumes', 'default.rgw.control', 'default.rgw.log', 'backups']
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
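[annotation] Each pg_autoscaler pair of lines above reports a pool's share of raw space, its bias, and the resulting raw pg target, which is then quantized to the pool's existing pg_num (1, 16 or 32 here), so no changes are proposed. From the logged figures, the raw target is consistently usage_ratio * bias * 300; the factor 300 would fit 100 target PGs per OSD across 3 OSDs in this 60 GiB cluster, but that is an inference from the numbers, not something the log states. A short check under that assumption:

    # Reproduce the pg targets printed by the pg_autoscaler above, assuming
    # raw_target = usage_ratio * bias * 300 (factor inferred, not stated in the log).
    pools = [
        # name,                usage_ratio,             bias, logged_target
        ('.mgr',               7.185749983720779e-06,   1.0,  0.0021557249951162337),
        ('cephfs.cephfs.meta', 5.087256625643029e-07,   4.0,  0.0006104707950771635),
        ('.rgw.root',          3.8154424692322717e-07,  1.0,  0.00011446327407696816),
        ('default.rgw.log',    2.1620840658982875e-06,  1.0,  0.0006486252197694863),
        ('default.rgw.meta',   1.2718141564107572e-07,  4.0,  0.00015261769876929088),
        ('.nfs',               6.359070782053786e-08,   1.0,  1.907721234616136e-05),
    ]
    for name, ratio, bias, logged in pools:
        computed = ratio * bias * 300
        print(f"{name:<20} computed={computed:.12g} logged={logged:.12g} "
              f"match={abs(computed - logged) < 1e-12}")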
Sep 30 14:26:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:26:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:26:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:26:59 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:26:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:26:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:27:00 compute-0 sudo[171563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:27:00 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Sep 30 14:27:00 compute-0 sudo[171563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:00 compute-0 sudo[171563]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:00.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:00.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:27:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:27:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:27:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:27:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:27:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:27:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:27:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:27:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:27:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:27:01 compute-0 ceph-mon[74194]: pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:27:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:01 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:27:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:02 compute-0 ceph-mon[74194]: pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:27:02 compute-0 podman[171590]: 2025-09-30 14:27:02.197997419 +0000 UTC m=+0.109172853 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923)
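[annotation] The podman event above is a periodic health_status report for the ovn_controller container: health_status=healthy with health_failing_streak=0, using the healthcheck declared in config_data ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/ovn_controller). As a hedged operator-side check, the sketch below triggers the same configured healthcheck on demand with "podman healthcheck run" and treats exit status 0 as healthy; it is not part of the edpm_ansible tooling itself.

    import subprocess

    # Trigger the container's configured healthcheck manually; exit 0 = healthy.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_controller'],
        capture_output=True, text=True,
    )
    print('healthy' if result.returncode == 0
          else f'unhealthy: {result.stdout or result.stderr}')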
Sep 30 14:27:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:02.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:03 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:27:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:04.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:04.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:04] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Sep 30 14:27:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:04] "GET /metrics HTTP/1.1" 200 48427 "" "Prometheus/2.51.0"
Sep 30 14:27:04 compute-0 ceph-mon[74194]: pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:27:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:05 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:27:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:06.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:06.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:06 compute-0 ceph-mon[74194]: pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:27:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:07.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:07 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:27:08 compute-0 ceph-mon[74194]: pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:27:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:08.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:08 compute-0 podman[171622]: 2025-09-30 14:27:08.141572345 +0000 UTC m=+0.071067440 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Sep 30 14:27:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:08.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:08.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:09 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:10.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:10.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:10 compute-0 ceph-mon[74194]: pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:11 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:12 compute-0 ceph-mon[74194]: pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:12.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:12.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:13 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:14.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:14 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:14.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:27:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:27:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:14 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:14] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:27:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:14] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:27:14 compute-0 ceph-mon[74194]: pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:27:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:15 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288001280 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:16.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:16 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:16.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:16 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:17.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:17 compute-0 ceph-mon[74194]: pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:27:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:17 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:18 compute-0 ceph-mon[74194]: pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:27:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:18 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:18.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:18 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:18.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:27:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:18.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:27:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:19 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:20.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:20 compute-0 ceph-mon[74194]: pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:20 compute-0 sudo[178421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:27:20 compute-0 sudo[178421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:20 compute-0 sudo[178421]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:20.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:21 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:22 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:22.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:22 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:22 compute-0 ceph-mon[74194]: pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:23 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:24.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:24 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:24.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:24 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:24] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:27:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:24] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:27:24 compute-0 ceph-mon[74194]: pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:25 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:26.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:26 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:26.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:26 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:26 compute-0 ceph-mon[74194]: pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:27.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:27:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:27 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:28.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:28 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:28.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:28 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:28.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:28 compute-0 ceph-mon[74194]: pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:27:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:27:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:27:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:27:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:27:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:27:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:27:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:27:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:27:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:29 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:27:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:30.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:30 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:30.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:30 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:30 compute-0 ceph-mon[74194]: pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:31 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288003110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:31 compute-0 ceph-mon[74194]: pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:32.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:32.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:33 compute-0 podman[186842]: 2025-09-30 14:27:33.155818646 +0000 UTC m=+0.082247060 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:27:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:33 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:34 compute-0 ceph-mon[74194]: pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:34.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:34 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:34 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:34] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:27:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:34] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:27:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:35 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00037c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:36.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:36 compute-0 ceph-mon[74194]: pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:37.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:27:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:37 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:38.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:27:38.239 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:27:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:27:38.240 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:27:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:27:38.240 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:27:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:38 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00037e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:38.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:38 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c004160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:38.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:27:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:38.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:38 compute-0 ceph-mon[74194]: pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:27:39 compute-0 podman[188465]: 2025-09-30 14:27:39.129080287 +0000 UTC m=+0.054472453 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 14:27:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:39 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:40.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:40 compute-0 ceph-mon[74194]: pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:40 compute-0 sudo[188484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:27:40 compute-0 sudo[188484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:40 compute-0 sudo[188484]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:40 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:40 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:41 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c004160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:42.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:42 compute-0 sudo[188511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:27:42 compute-0 sudo[188511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:42 compute-0 sudo[188511]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:42 compute-0 sudo[188536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:27:42 compute-0 sudo[188536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:42 compute-0 ceph-mon[74194]: pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:43 compute-0 sudo[188536]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:27:43 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:27:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:27:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:27:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:27:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:27:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:27:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:27:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:27:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:27:43 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:27:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:27:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:27:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:27:43 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:27:43 compute-0 sudo[188593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:27:43 compute-0 sudo[188593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:43 compute-0 sudo[188593]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:43 compute-0 sudo[188618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:27:43 compute-0 sudo[188618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:43 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:27:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:27:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:27:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:27:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:27:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:27:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:27:44 compute-0 podman[188682]: 2025-09-30 14:27:44.014151145 +0000 UTC m=+0.048718020 container create ddd4e2cef37b745edddf61073fe16d3ab39a56d3903392edbeb25372c86389e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_haslett, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 14:27:44 compute-0 systemd[1]: Started libpod-conmon-ddd4e2cef37b745edddf61073fe16d3ab39a56d3903392edbeb25372c86389e9.scope.
Sep 30 14:27:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:44.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:27:44 compute-0 podman[188682]: 2025-09-30 14:27:43.990370536 +0000 UTC m=+0.024937431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:27:44 compute-0 podman[188682]: 2025-09-30 14:27:44.104421299 +0000 UTC m=+0.138988194 container init ddd4e2cef37b745edddf61073fe16d3ab39a56d3903392edbeb25372c86389e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 14:27:44 compute-0 podman[188682]: 2025-09-30 14:27:44.112875736 +0000 UTC m=+0.147442621 container start ddd4e2cef37b745edddf61073fe16d3ab39a56d3903392edbeb25372c86389e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_haslett, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:27:44 compute-0 podman[188682]: 2025-09-30 14:27:44.116237146 +0000 UTC m=+0.150804041 container attach ddd4e2cef37b745edddf61073fe16d3ab39a56d3903392edbeb25372c86389e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:27:44 compute-0 sleepy_haslett[188698]: 167 167
Sep 30 14:27:44 compute-0 systemd[1]: libpod-ddd4e2cef37b745edddf61073fe16d3ab39a56d3903392edbeb25372c86389e9.scope: Deactivated successfully.
Sep 30 14:27:44 compute-0 podman[188682]: 2025-09-30 14:27:44.120353777 +0000 UTC m=+0.154920652 container died ddd4e2cef37b745edddf61073fe16d3ab39a56d3903392edbeb25372c86389e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_haslett, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-96938f0f534840575daa4fe58facf79ad1b130e8586be8dc3d821a0bc9c1a188-merged.mount: Deactivated successfully.
Sep 30 14:27:44 compute-0 podman[188682]: 2025-09-30 14:27:44.16143277 +0000 UTC m=+0.195999645 container remove ddd4e2cef37b745edddf61073fe16d3ab39a56d3903392edbeb25372c86389e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:27:44 compute-0 systemd[1]: libpod-conmon-ddd4e2cef37b745edddf61073fe16d3ab39a56d3903392edbeb25372c86389e9.scope: Deactivated successfully.
Sep 30 14:27:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:44 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c004160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:44 compute-0 podman[188720]: 2025-09-30 14:27:44.309698741 +0000 UTC m=+0.035772942 container create 6bc82747b4ba8446209cb9f07a6e80e5408e3804c99f3a9683fbbd7f20c8d3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilbur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:27:44 compute-0 systemd[1]: Started libpod-conmon-6bc82747b4ba8446209cb9f07a6e80e5408e3804c99f3a9683fbbd7f20c8d3ca.scope.
Sep 30 14:27:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7183a21c4e3dff6786aca4d687f1fcc9a63f5d886ad74393101956519cb255/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7183a21c4e3dff6786aca4d687f1fcc9a63f5d886ad74393101956519cb255/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7183a21c4e3dff6786aca4d687f1fcc9a63f5d886ad74393101956519cb255/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7183a21c4e3dff6786aca4d687f1fcc9a63f5d886ad74393101956519cb255/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7183a21c4e3dff6786aca4d687f1fcc9a63f5d886ad74393101956519cb255/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:44 compute-0 podman[188720]: 2025-09-30 14:27:44.388421015 +0000 UTC m=+0.114495236 container init 6bc82747b4ba8446209cb9f07a6e80e5408e3804c99f3a9683fbbd7f20c8d3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:27:44 compute-0 podman[188720]: 2025-09-30 14:27:44.295335355 +0000 UTC m=+0.021409576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:27:44 compute-0 podman[188720]: 2025-09-30 14:27:44.395414443 +0000 UTC m=+0.121488644 container start 6bc82747b4ba8446209cb9f07a6e80e5408e3804c99f3a9683fbbd7f20c8d3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:27:44 compute-0 podman[188720]: 2025-09-30 14:27:44.398879866 +0000 UTC m=+0.124954087 container attach 6bc82747b4ba8446209cb9f07a6e80e5408e3804c99f3a9683fbbd7f20c8d3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilbur, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:27:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:44.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:27:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:27:44 compute-0 great_wilbur[188737]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:27:44 compute-0 great_wilbur[188737]: --> All data devices are unavailable
Sep 30 14:27:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:44 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:44 compute-0 systemd[1]: libpod-6bc82747b4ba8446209cb9f07a6e80e5408e3804c99f3a9683fbbd7f20c8d3ca.scope: Deactivated successfully.
Sep 30 14:27:44 compute-0 podman[188720]: 2025-09-30 14:27:44.723423813 +0000 UTC m=+0.449498014 container died 6bc82747b4ba8446209cb9f07a6e80e5408e3804c99f3a9683fbbd7f20c8d3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:27:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:44] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:27:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:44] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7183a21c4e3dff6786aca4d687f1fcc9a63f5d886ad74393101956519cb255-merged.mount: Deactivated successfully.
Sep 30 14:27:44 compute-0 podman[188720]: 2025-09-30 14:27:44.768243076 +0000 UTC m=+0.494317277 container remove 6bc82747b4ba8446209cb9f07a6e80e5408e3804c99f3a9683fbbd7f20c8d3ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilbur, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:27:44 compute-0 systemd[1]: libpod-conmon-6bc82747b4ba8446209cb9f07a6e80e5408e3804c99f3a9683fbbd7f20c8d3ca.scope: Deactivated successfully.
Sep 30 14:27:44 compute-0 sudo[188618]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:44 compute-0 sudo[188763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:27:44 compute-0 sudo[188763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:44 compute-0 sudo[188763]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:44 compute-0 sudo[188788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:27:44 compute-0 sudo[188788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:44 compute-0 ceph-mon[74194]: pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:27:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:27:45 compute-0 podman[188852]: 2025-09-30 14:27:45.317408605 +0000 UTC m=+0.039969434 container create 047f13b2b82a678915992422297271d34d5b342a630131fb0efadac3d7174d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dubinsky, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:27:45 compute-0 systemd[1]: Started libpod-conmon-047f13b2b82a678915992422297271d34d5b342a630131fb0efadac3d7174d36.scope.
Sep 30 14:27:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:27:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:27:45 compute-0 podman[188852]: 2025-09-30 14:27:45.29821157 +0000 UTC m=+0.020772419 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:27:45 compute-0 podman[188852]: 2025-09-30 14:27:45.39725626 +0000 UTC m=+0.119817109 container init 047f13b2b82a678915992422297271d34d5b342a630131fb0efadac3d7174d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dubinsky, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 14:27:45 compute-0 podman[188852]: 2025-09-30 14:27:45.404282578 +0000 UTC m=+0.126843407 container start 047f13b2b82a678915992422297271d34d5b342a630131fb0efadac3d7174d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dubinsky, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:27:45 compute-0 podman[188852]: 2025-09-30 14:27:45.407904036 +0000 UTC m=+0.130464885 container attach 047f13b2b82a678915992422297271d34d5b342a630131fb0efadac3d7174d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dubinsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:27:45 compute-0 eloquent_dubinsky[188869]: 167 167
Sep 30 14:27:45 compute-0 systemd[1]: libpod-047f13b2b82a678915992422297271d34d5b342a630131fb0efadac3d7174d36.scope: Deactivated successfully.
Sep 30 14:27:45 compute-0 podman[188852]: 2025-09-30 14:27:45.410360482 +0000 UTC m=+0.132921311 container died 047f13b2b82a678915992422297271d34d5b342a630131fb0efadac3d7174d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bf31a9d0f301e8dfd8c6505396e43b8625360f968e979e04e2b34f94bb2b05e-merged.mount: Deactivated successfully.
Sep 30 14:27:45 compute-0 podman[188852]: 2025-09-30 14:27:45.44566812 +0000 UTC m=+0.168228949 container remove 047f13b2b82a678915992422297271d34d5b342a630131fb0efadac3d7174d36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dubinsky, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:27:45 compute-0 systemd[1]: libpod-conmon-047f13b2b82a678915992422297271d34d5b342a630131fb0efadac3d7174d36.scope: Deactivated successfully.
Sep 30 14:27:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:45 compute-0 podman[188893]: 2025-09-30 14:27:45.610052235 +0000 UTC m=+0.037759395 container create 7bd35bdec3f61f9347bc3f00fede7fc91550232ac0914c32a8c70d5ca30511b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:27:45 compute-0 systemd[1]: Started libpod-conmon-7bd35bdec3f61f9347bc3f00fede7fc91550232ac0914c32a8c70d5ca30511b3.scope.
Sep 30 14:27:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f245beaf033f8e95ba4132310d384b633483ed327273cc2f08824c70fccce893/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f245beaf033f8e95ba4132310d384b633483ed327273cc2f08824c70fccce893/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f245beaf033f8e95ba4132310d384b633483ed327273cc2f08824c70fccce893/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f245beaf033f8e95ba4132310d384b633483ed327273cc2f08824c70fccce893/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:45 compute-0 podman[188893]: 2025-09-30 14:27:45.594435105 +0000 UTC m=+0.022142295 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:27:45 compute-0 podman[188893]: 2025-09-30 14:27:45.698931052 +0000 UTC m=+0.126638212 container init 7bd35bdec3f61f9347bc3f00fede7fc91550232ac0914c32a8c70d5ca30511b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_villani, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:27:45 compute-0 podman[188893]: 2025-09-30 14:27:45.713719669 +0000 UTC m=+0.141426869 container start 7bd35bdec3f61f9347bc3f00fede7fc91550232ac0914c32a8c70d5ca30511b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_villani, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:27:45 compute-0 podman[188893]: 2025-09-30 14:27:45.717426718 +0000 UTC m=+0.145133928 container attach 7bd35bdec3f61f9347bc3f00fede7fc91550232ac0914c32a8c70d5ca30511b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 14:27:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:45 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:46 compute-0 optimistic_villani[188910]: {
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:     "0": [
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:         {
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "devices": [
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "/dev/loop3"
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             ],
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "lv_name": "ceph_lv0",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "lv_size": "21470642176",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "name": "ceph_lv0",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "tags": {
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.cluster_name": "ceph",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.crush_device_class": "",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.encrypted": "0",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.osd_id": "0",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.type": "block",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.vdo": "0",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:                 "ceph.with_tpm": "0"
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             },
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "type": "block",
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:             "vg_name": "ceph_vg0"
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:         }
Sep 30 14:27:46 compute-0 optimistic_villani[188910]:     ]
Sep 30 14:27:46 compute-0 optimistic_villani[188910]: }
Sep 30 14:27:46 compute-0 systemd[1]: libpod-7bd35bdec3f61f9347bc3f00fede7fc91550232ac0914c32a8c70d5ca30511b3.scope: Deactivated successfully.
Sep 30 14:27:46 compute-0 podman[188893]: 2025-09-30 14:27:46.032348136 +0000 UTC m=+0.460055306 container died 7bd35bdec3f61f9347bc3f00fede7fc91550232ac0914c32a8c70d5ca30511b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_villani, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f245beaf033f8e95ba4132310d384b633483ed327273cc2f08824c70fccce893-merged.mount: Deactivated successfully.
Sep 30 14:27:46 compute-0 podman[188893]: 2025-09-30 14:27:46.06637617 +0000 UTC m=+0.494083330 container remove 7bd35bdec3f61f9347bc3f00fede7fc91550232ac0914c32a8c70d5ca30511b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:27:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:46.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:46 compute-0 systemd[1]: libpod-conmon-7bd35bdec3f61f9347bc3f00fede7fc91550232ac0914c32a8c70d5ca30511b3.scope: Deactivated successfully.
Sep 30 14:27:46 compute-0 sudo[188788]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:46 compute-0 sudo[188933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:27:46 compute-0 sudo[188933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:46 compute-0 sudo[188933]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:46 compute-0 sudo[188958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:27:46 compute-0 sudo[188958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:46.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:46 compute-0 podman[189022]: 2025-09-30 14:27:46.663410955 +0000 UTC m=+0.038078034 container create 4212ec9ce9f469bbcae6084517de01e04ebe6ad3f8ec77da1a582a506f836b30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:27:46 compute-0 systemd[1]: Started libpod-conmon-4212ec9ce9f469bbcae6084517de01e04ebe6ad3f8ec77da1a582a506f836b30.scope.
Sep 30 14:27:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:27:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:46 compute-0 podman[189022]: 2025-09-30 14:27:46.720073776 +0000 UTC m=+0.094740885 container init 4212ec9ce9f469bbcae6084517de01e04ebe6ad3f8ec77da1a582a506f836b30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:27:46 compute-0 podman[189022]: 2025-09-30 14:27:46.725685977 +0000 UTC m=+0.100353066 container start 4212ec9ce9f469bbcae6084517de01e04ebe6ad3f8ec77da1a582a506f836b30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:27:46 compute-0 podman[189022]: 2025-09-30 14:27:46.728974616 +0000 UTC m=+0.103641695 container attach 4212ec9ce9f469bbcae6084517de01e04ebe6ad3f8ec77da1a582a506f836b30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_liskov, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 14:27:46 compute-0 laughing_liskov[189039]: 167 167
Sep 30 14:27:46 compute-0 systemd[1]: libpod-4212ec9ce9f469bbcae6084517de01e04ebe6ad3f8ec77da1a582a506f836b30.scope: Deactivated successfully.
Sep 30 14:27:46 compute-0 podman[189022]: 2025-09-30 14:27:46.732724126 +0000 UTC m=+0.107391205 container died 4212ec9ce9f469bbcae6084517de01e04ebe6ad3f8ec77da1a582a506f836b30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:27:46 compute-0 podman[189022]: 2025-09-30 14:27:46.647252861 +0000 UTC m=+0.021919960 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3b8fe341d7bc5b64f5fb75925c702a17126374deb6e5005897d8ca79f924986-merged.mount: Deactivated successfully.
Sep 30 14:27:46 compute-0 podman[189022]: 2025-09-30 14:27:46.76636859 +0000 UTC m=+0.141035669 container remove 4212ec9ce9f469bbcae6084517de01e04ebe6ad3f8ec77da1a582a506f836b30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_liskov, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 14:27:46 compute-0 systemd[1]: libpod-conmon-4212ec9ce9f469bbcae6084517de01e04ebe6ad3f8ec77da1a582a506f836b30.scope: Deactivated successfully.
Sep 30 14:27:46 compute-0 ceph-mon[74194]: pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:27:46 compute-0 podman[189063]: 2025-09-30 14:27:46.956050264 +0000 UTC m=+0.054189236 container create 46e7ff62248a081cb2fde0637ebe0139c834d4c7cc654a5578f1f2909a75fcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_carver, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:27:46 compute-0 systemd[1]: Started libpod-conmon-46e7ff62248a081cb2fde0637ebe0139c834d4c7cc654a5578f1f2909a75fcdc.scope.
Sep 30 14:27:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:47.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:27:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:47.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:27:47 compute-0 podman[189063]: 2025-09-30 14:27:46.924295631 +0000 UTC m=+0.022434633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dad806173a89219a02873f42b80e9d617b7c83d37eaacdc670237a82ef55b88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dad806173a89219a02873f42b80e9d617b7c83d37eaacdc670237a82ef55b88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dad806173a89219a02873f42b80e9d617b7c83d37eaacdc670237a82ef55b88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dad806173a89219a02873f42b80e9d617b7c83d37eaacdc670237a82ef55b88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:27:47 compute-0 podman[189063]: 2025-09-30 14:27:47.035423616 +0000 UTC m=+0.133562618 container init 46e7ff62248a081cb2fde0637ebe0139c834d4c7cc654a5578f1f2909a75fcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_carver, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:27:47 compute-0 podman[189063]: 2025-09-30 14:27:47.04265305 +0000 UTC m=+0.140792042 container start 46e7ff62248a081cb2fde0637ebe0139c834d4c7cc654a5578f1f2909a75fcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_carver, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:27:47 compute-0 podman[189063]: 2025-09-30 14:27:47.045237669 +0000 UTC m=+0.143376661 container attach 46e7ff62248a081cb2fde0637ebe0139c834d4c7cc654a5578f1f2909a75fcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:27:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 531 B/s rd, 0 op/s
Sep 30 14:27:47 compute-0 lvm[189158]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:27:47 compute-0 lvm[189158]: VG ceph_vg0 finished
Sep 30 14:27:47 compute-0 gallant_carver[189079]: {}
Sep 30 14:27:47 compute-0 systemd[1]: libpod-46e7ff62248a081cb2fde0637ebe0139c834d4c7cc654a5578f1f2909a75fcdc.scope: Deactivated successfully.
Sep 30 14:27:47 compute-0 systemd[1]: libpod-46e7ff62248a081cb2fde0637ebe0139c834d4c7cc654a5578f1f2909a75fcdc.scope: Consumed 1.117s CPU time.
Sep 30 14:27:47 compute-0 podman[189063]: 2025-09-30 14:27:47.844929566 +0000 UTC m=+0.943068578 container died 46e7ff62248a081cb2fde0637ebe0139c834d4c7cc654a5578f1f2909a75fcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:27:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:47 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:48.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:48 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:27:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:48.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:27:48 compute-0 ceph-mon[74194]: pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 531 B/s rd, 0 op/s
Sep 30 14:27:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dad806173a89219a02873f42b80e9d617b7c83d37eaacdc670237a82ef55b88-merged.mount: Deactivated successfully.
Sep 30 14:27:48 compute-0 podman[189063]: 2025-09-30 14:27:48.569892276 +0000 UTC m=+1.668031258 container remove 46e7ff62248a081cb2fde0637ebe0139c834d4c7cc654a5578f1f2909a75fcdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_carver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:27:48 compute-0 sudo[188958]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:27:48 compute-0 systemd[1]: libpod-conmon-46e7ff62248a081cb2fde0637ebe0139c834d4c7cc654a5578f1f2909a75fcdc.scope: Deactivated successfully.
Sep 30 14:27:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:27:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:27:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:48 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:27:48 compute-0 kernel: SELinux:  Converting 2773 SID table entries...
Sep 30 14:27:48 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Sep 30 14:27:48 compute-0 kernel: SELinux:  policy capability open_perms=1
Sep 30 14:27:48 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Sep 30 14:27:48 compute-0 kernel: SELinux:  policy capability always_check_network=0
Sep 30 14:27:48 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Sep 30 14:27:48 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Sep 30 14:27:48 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Sep 30 14:27:48 compute-0 sudo[189181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:27:48 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Sep 30 14:27:48 compute-0 sudo[189181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:27:48 compute-0 sudo[189181]: pam_unix(sudo:session): session closed for user root
Sep 30 14:27:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:48.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:27:49 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:27:49 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:27:49 compute-0 groupadd[189214]: group added to /etc/group: name=dnsmasq, GID=991
Sep 30 14:27:49 compute-0 groupadd[189214]: group added to /etc/gshadow: name=dnsmasq
Sep 30 14:27:49 compute-0 groupadd[189214]: new group: name=dnsmasq, GID=991
Sep 30 14:27:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:49 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:49 compute-0 useradd[189221]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Sep 30 14:27:49 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Sep 30 14:27:49 compute-0 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Sep 30 14:27:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:50.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:50 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:50.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:50 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:50 compute-0 ceph-mon[74194]: pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:27:50 compute-0 groupadd[189234]: group added to /etc/group: name=clevis, GID=990
Sep 30 14:27:50 compute-0 groupadd[189234]: group added to /etc/gshadow: name=clevis
Sep 30 14:27:50 compute-0 groupadd[189234]: new group: name=clevis, GID=990
Sep 30 14:27:50 compute-0 useradd[189241]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Sep 30 14:27:51 compute-0 usermod[189251]: add 'clevis' to group 'tss'
Sep 30 14:27:51 compute-0 usermod[189251]: add 'clevis' to shadow group 'tss'
Sep 30 14:27:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:27:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:51 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:52.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:52 compute-0 ceph-mon[74194]: pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:27:53 compute-0 polkitd[8246]: Reloading rules
Sep 30 14:27:53 compute-0 polkitd[8246]: Collecting garbage unconditionally...
Sep 30 14:27:53 compute-0 polkitd[8246]: Loading rules from directory /etc/polkit-1/rules.d
Sep 30 14:27:53 compute-0 polkitd[8246]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 30 14:27:53 compute-0 polkitd[8246]: Finished loading, compiling and executing 4 rules
Sep 30 14:27:53 compute-0 polkitd[8246]: Reloading rules
Sep 30 14:27:53 compute-0 polkitd[8246]: Collecting garbage unconditionally...
Sep 30 14:27:53 compute-0 polkitd[8246]: Loading rules from directory /etc/polkit-1/rules.d
Sep 30 14:27:53 compute-0 polkitd[8246]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 30 14:27:53 compute-0 polkitd[8246]: Finished loading, compiling and executing 4 rules
Sep 30 14:27:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:27:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:53 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:54.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:54 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:54 compute-0 groupadd[189442]: group added to /etc/group: name=ceph, GID=167
Sep 30 14:27:54 compute-0 groupadd[189442]: group added to /etc/gshadow: name=ceph
Sep 30 14:27:54 compute-0 groupadd[189442]: new group: name=ceph, GID=167
Sep 30 14:27:54 compute-0 useradd[189448]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Sep 30 14:27:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:27:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:54.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:27:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:54 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:54] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:27:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:27:54] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:27:54 compute-0 ceph-mon[74194]: pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:27:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:27:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:55 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:56.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:56 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:56.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:56 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:56 compute-0 ceph-mon[74194]: pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:57.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:27:57 compute-0 sshd[1005]: Received signal 15; terminating.
Sep 30 14:27:57 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Sep 30 14:27:57 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Sep 30 14:27:57 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Sep 30 14:27:57 compute-0 systemd[1]: sshd.service: Consumed 10.883s CPU time, read 0B from disk, written 224.0K to disk.
Sep 30 14:27:57 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Sep 30 14:27:57 compute-0 systemd[1]: Stopping sshd-keygen.target...
Sep 30 14:27:57 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 14:27:57 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 14:27:57 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Sep 30 14:27:57 compute-0 systemd[1]: Reached target sshd-keygen.target.
Sep 30 14:27:57 compute-0 systemd[1]: Starting OpenSSH server daemon...
Sep 30 14:27:57 compute-0 sshd[190147]: Server listening on 0.0.0.0 port 22.
Sep 30 14:27:57 compute-0 sshd[190147]: Server listening on :: port 22.
Sep 30 14:27:57 compute-0 systemd[1]: Started OpenSSH server daemon.
Sep 30 14:27:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:57 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:27:58.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:58 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:27:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:27:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:27:58.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:27:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:58 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:58 compute-0 ceph-mon[74194]: pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:27:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:58.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:27:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:27:58.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:27:59
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', '.nfs', 'volumes', 'default.rgw.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', 'vms']
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:27:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:27:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:27:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:27:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:27:59 compute-0 systemd[1]: Reloading.
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:27:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:27:59 compute-0 systemd-rc-local-generator[190402]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:27:59 compute-0 systemd-sysv-generator[190408]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:27:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:27:59 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:27:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:28:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 14:28:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:00.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:00 compute-0 sudo[190844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:28:00 compute-0 sudo[190844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:00 compute-0 sudo[190844]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:00.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:28:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:28:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:28:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:28:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:28:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:28:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:28:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:28:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:28:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:28:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:01 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:02.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:02 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Check health
Sep 30 14:28:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:02.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:02 compute-0 ceph-mon[74194]: pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:03 compute-0 systemd[1]: Starting PackageKit Daemon...
Sep 30 14:28:03 compute-0 PackageKit[194450]: daemon start
Sep 30 14:28:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:03 compute-0 systemd[1]: Started PackageKit Daemon.
Sep 30 14:28:03 compute-0 podman[194432]: 2025-09-30 14:28:03.458255088 +0000 UTC m=+0.090980784 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 14:28:03 compute-0 sudo[170552]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:03 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:04 compute-0 ceph-mon[74194]: pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:04.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:04.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac003050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:04] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:28:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:04] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:28:05 compute-0 ceph-mon[74194]: pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:05 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:06 compute-0 ceph-mon[74194]: pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:06.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:28:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:06.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:28:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:07.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:28:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:07.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:28:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:28:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 14:28:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 14:28:07 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.205s CPU time.
Sep 30 14:28:07 compute-0 systemd[1]: run-r20f80d1f8fce4dada944d8b42b77b9ec.service: Deactivated successfully.
Sep 30 14:28:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:07 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:08.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:08.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:08 compute-0 ceph-mon[74194]: pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:28:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:08.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:28:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:09 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:10.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:10 compute-0 podman[198746]: 2025-09-30 14:28:10.125919912 +0000 UTC m=+0.060089465 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Sep 30 14:28:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:10.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:10 compute-0 ceph-mon[74194]: pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:11 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:12.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:12.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:12 compute-0 ceph-mon[74194]: pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:13 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:14.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:14 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:14.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:14 compute-0 ceph-mon[74194]: pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:28:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:28:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:14] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:28:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:14] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:28:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:14 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:15 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:16.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:16 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:16 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:28:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:28:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:16.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:28:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:16 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2288004300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:17.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:28:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:28:17 compute-0 ceph-mon[74194]: pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:17 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:28:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:18.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:28:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:18 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001660 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:18 compute-0 ceph-mon[74194]: pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:28:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:18 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f228c0010d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:18.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:28:19 compute-0 sudo[198900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwcwhgaihoejrarozbhrfkdqcfshzntn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242498.5394626-968-271076410315152/AnsiballZ_systemd.py'
Sep 30 14:28:19 compute-0 sudo[198900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:19 compute-0 python3.9[198902]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 14:28:19 compute-0 systemd[1]: Reloading.
Sep 30 14:28:19 compute-0 systemd-rc-local-generator[198929]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:28:19 compute-0 systemd-sysv-generator[198934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:28:19 compute-0 sudo[198900]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:19 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:20.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:20 compute-0 sudo[199093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbwrngzgzkkhdczrslehrsupnooqiylr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242500.0346878-968-173291778089062/AnsiballZ_systemd.py'
Sep 30 14:28:20 compute-0 sudo[199093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:20 compute-0 sudo[199094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:28:20 compute-0 sudo[199094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:20 compute-0 sudo[199094]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:20 compute-0 ceph-mon[74194]: pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:20 compute-0 python3.9[199102]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 14:28:20 compute-0 systemd[1]: Reloading.
Sep 30 14:28:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:20 compute-0 systemd-rc-local-generator[199148]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:28:20 compute-0 systemd-sysv-generator[199151]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:28:21 compute-0 sudo[199093]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:21 compute-0 sudo[199309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apifqumszmernicremyixgykxygcbdwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242501.1697478-968-25144634425762/AnsiballZ_systemd.py'
Sep 30 14:28:21 compute-0 sudo[199309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:21 compute-0 python3.9[199311]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 14:28:21 compute-0 systemd[1]: Reloading.
Sep 30 14:28:21 compute-0 systemd-rc-local-generator[199343]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:28:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:21 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:21 compute-0 systemd-sysv-generator[199347]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:28:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:22.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:22 compute-0 sudo[199309]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:22 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:22.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:22 compute-0 ceph-mon[74194]: pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:22 compute-0 sudo[199500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ignfdwvulitnnzyuzbdwtoyfaumsclfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242502.305266-968-247196024034517/AnsiballZ_systemd.py'
Sep 30 14:28:22 compute-0 sudo[199500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:22 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0001bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:22 compute-0 python3.9[199502]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 14:28:22 compute-0 systemd[1]: Reloading.
Sep 30 14:28:23 compute-0 systemd-rc-local-generator[199531]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:28:23 compute-0 systemd-sysv-generator[199536]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:28:23 compute-0 sudo[199500]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:23 compute-0 sudo[199691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srquhsiijmfkzjfqwxpowmuexfxhjhti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242503.452603-1055-157173492670681/AnsiballZ_systemd.py'
Sep 30 14:28:23 compute-0 sudo[199691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:23 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:24 compute-0 python3.9[199694]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:24 compute-0 systemd[1]: Reloading.
Sep 30 14:28:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:24.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:24 compute-0 systemd-rc-local-generator[199726]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:28:24 compute-0 systemd-sysv-generator[199730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:28:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:24 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:24 compute-0 sudo[199691]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:24.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:24 compute-0 ceph-mon[74194]: pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:24] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:28:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:24] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:28:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:24 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:25 compute-0 sudo[199883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akxlsuqkyxwurhplwqimaghnipwnmlhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242504.6495035-1055-276886107427015/AnsiballZ_systemd.py'
Sep 30 14:28:25 compute-0 sudo[199883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:25 compute-0 python3.9[199885]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:25 compute-0 systemd[1]: Reloading.
Sep 30 14:28:25 compute-0 systemd-sysv-generator[199920]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:28:25 compute-0 systemd-rc-local-generator[199917]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:28:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:25 compute-0 sudo[199883]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:25 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0001bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:26.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:26 compute-0 sudo[200075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jskmcravlieqzaqpadbadkiiqqogbliu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242505.9779506-1055-16091470397694/AnsiballZ_systemd.py'
Sep 30 14:28:26 compute-0 sudo[200075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:26 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:26.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:26 compute-0 python3.9[200077]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:26 compute-0 systemd[1]: Reloading.
Sep 30 14:28:26 compute-0 ceph-mon[74194]: pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:26 compute-0 systemd-sysv-generator[200114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:28:26 compute-0 systemd-rc-local-generator[200109]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:28:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:26 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:26 compute-0 sudo[200075]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:27.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:28:27 compute-0 sudo[200265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtqsjxozjnjztpshpnmsspmqxqndcsud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242507.0445263-1055-53158468196029/AnsiballZ_systemd.py'
Sep 30 14:28:27 compute-0 sudo[200265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:28:27 compute-0 python3.9[200268]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:27 compute-0 sudo[200265]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:27 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:28.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:28 compute-0 sudo[200422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anwkaydsjspdjeapxnqmmsipnvpgpkvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242507.8609734-1055-74450722757611/AnsiballZ_systemd.py'
Sep 30 14:28:28 compute-0 sudo[200422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:28 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0002510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:28 compute-0 python3.9[200424]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:28.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:28 compute-0 systemd[1]: Reloading.
Sep 30 14:28:28 compute-0 systemd-rc-local-generator[200455]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:28:28 compute-0 ceph-mon[74194]: pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:28:28 compute-0 systemd-sysv-generator[200460]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:28:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:28 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:28 compute-0 sudo[200422]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:28.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:28:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:28.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:28:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:28.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:28:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:29 compute-0 sudo[200614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cputwhjzrkhxmgcfuiskewbfueeicctt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242509.316559-1163-21048064765388/AnsiballZ_systemd.py'
Sep 30 14:28:29 compute-0 sudo[200614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:28:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:28:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:28:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:28:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:28:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:28:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:28:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:28:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=sqlstore.transactions t=2025-09-30T14:28:29.854750695Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Sep 30 14:28:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=cleanup t=2025-09-30T14:28:29.877965888Z level=info msg="Completed cleanup jobs" duration=35.76332ms
Sep 30 14:28:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=sqlstore.transactions t=2025-09-30T14:28:29.886536618Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Sep 30 14:28:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:29 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22800016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:29 compute-0 python3.9[200616]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Sep 30 14:28:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=plugins.update.checker t=2025-09-30T14:28:29.962677133Z level=info msg="Update check succeeded" duration=52.651664ms
Sep 30 14:28:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=grafana.update.checker t=2025-09-30T14:28:29.965126519Z level=info msg="Update check succeeded" duration=52.292025ms
Sep 30 14:28:29 compute-0 systemd[1]: Reloading.
Sep 30 14:28:30 compute-0 systemd-rc-local-generator[200644]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:28:30 compute-0 systemd-sysv-generator[200650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:28:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:30.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:30 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:30 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Sep 30 14:28:30 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Sep 30 14:28:30 compute-0 sudo[200614]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:30.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:30 compute-0 ceph-mon[74194]: pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:28:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:30 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0002510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:30 compute-0 sudo[200808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubqlifrtjldyugochxdvegidtasjrduj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242510.6064787-1187-240730830589874/AnsiballZ_systemd.py'
Sep 30 14:28:30 compute-0 sudo[200808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:31 compute-0 python3.9[200810]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:31 compute-0 sudo[200808]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:31 compute-0 sudo[200965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhybxcidlnlyhkrquhynebemefielyvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242511.5032153-1187-147748140279155/AnsiballZ_systemd.py'
Sep 30 14:28:31 compute-0 sudo[200965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:31 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:32 compute-0 python3.9[200967]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:28:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:28:32 compute-0 sudo[200965]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22800016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:32.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:32 compute-0 sudo[201120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnlrthxeuteccjdwnbvcqjabuwtgjiix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242512.2980354-1187-44837647841645/AnsiballZ_systemd.py'
Sep 30 14:28:32 compute-0 sudo[201120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:32 compute-0 ceph-mon[74194]: pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22800016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:32 compute-0 python3.9[201122]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:32 compute-0 sudo[201120]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:33 compute-0 sudo[201276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-musosvbxigkuchggywocvfvtfowfaaaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242513.099874-1187-164607242577692/AnsiballZ_systemd.py'
Sep 30 14:28:33 compute-0 sudo[201276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:33 compute-0 python3.9[201278]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:33 compute-0 sudo[201276]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:33 compute-0 podman[201280]: 2025-09-30 14:28:33.862026607 +0000 UTC m=+0.106950674 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Sep 30 14:28:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:33 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:34 compute-0 sudo[201456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upxohyuwegkfrxiasnopeaebaxhofaya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242513.9719543-1187-103065016808161/AnsiballZ_systemd.py'
Sep 30 14:28:34 compute-0 sudo[201456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:34 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:34.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:34 compute-0 python3.9[201458]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:34 compute-0 sudo[201456]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:34 compute-0 ceph-mon[74194]: pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:34] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:28:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:34] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:28:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:34 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22800016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:35 compute-0 sudo[201611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlgnlrspslsjgvefgpicffyekygghwem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242514.7654288-1187-176492004443219/AnsiballZ_systemd.py'
Sep 30 14:28:35 compute-0 sudo[201611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:35 compute-0 python3.9[201613]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:35 compute-0 sudo[201611]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:35 compute-0 sudo[201768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plaxbfydxdhxhdpvkruqhldmjtesjtlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242515.5347242-1187-202902251607284/AnsiballZ_systemd.py'
Sep 30 14:28:35 compute-0 sudo[201768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:35 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22800016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:28:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:36.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:28:36 compute-0 python3.9[201770]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:36 compute-0 sudo[201768]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a0003220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:28:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:36.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:28:36 compute-0 sudo[201923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gojrlcadakezpfbkgmjruotkjsadgdjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242516.3739324-1187-74658587350841/AnsiballZ_systemd.py'
Sep 30 14:28:36 compute-0 sudo[201923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:36 compute-0 ceph-mon[74194]: pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:36 compute-0 python3.9[201925]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:37.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:28:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:37.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:28:37 compute-0 sudo[201923]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:28:37 compute-0 sudo[202079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpmgrfjjoerwijaflolfpqcajsloulbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242517.1721435-1187-29248964326132/AnsiballZ_systemd.py'
Sep 30 14:28:37 compute-0 sudo[202079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:37 compute-0 python3.9[202081]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:37 compute-0 sudo[202079]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:37 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:38.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:38 compute-0 sudo[202235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skkwhpncalaxppougisboyksgpxyrizb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242517.9183464-1187-238416360804334/AnsiballZ_systemd.py'
Sep 30 14:28:38 compute-0 sudo[202235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:28:38.241 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:28:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:28:38.242 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:28:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:28:38.242 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:28:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:38 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:38 compute-0 python3.9[202237]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:28:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:38.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:28:38 compute-0 sudo[202235]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:38 compute-0 ceph-mon[74194]: pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:28:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:38 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:38.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:28:39 compute-0 sudo[202390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lazuwygkqidgwtfxsjepcbxwkxgyrfmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242518.7197406-1187-155415489584174/AnsiballZ_systemd.py'
Sep 30 14:28:39 compute-0 sudo[202390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:39 compute-0 python3.9[202392]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:39 compute-0 sudo[202390]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:39 compute-0 sudo[202547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coamzlrukeczxzvjcjvxyxnknpacswob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242519.5900471-1187-21940785602142/AnsiballZ_systemd.py'
Sep 30 14:28:39 compute-0 sudo[202547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:39 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:40.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:40 compute-0 python3.9[202549]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:40 compute-0 sudo[202547]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:40 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:40 compute-0 podman[202551]: 2025-09-30 14:28:40.27995976 +0000 UTC m=+0.068375607 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 14:28:40 compute-0 sudo[202609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:28:40 compute-0 sudo[202609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:40 compute-0 sudo[202609]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:40.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:40 compute-0 sudo[202746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucynbvwaghgphgskrmnzuictmvbiqsay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242520.3889906-1187-85705849056849/AnsiballZ_systemd.py'
Sep 30 14:28:40 compute-0 sudo[202746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:40 compute-0 ceph-mon[74194]: pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:40 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:41 compute-0 python3.9[202748]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:41 compute-0 sudo[202746]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:41 compute-0 sudo[202902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufthspghckpggbqfyhgshriwipewxtwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242521.228083-1187-122089652233948/AnsiballZ_systemd.py'
Sep 30 14:28:41 compute-0 sudo[202902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:41 compute-0 python3.9[202904]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Sep 30 14:28:41 compute-0 sudo[202902]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:41 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:42.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:42.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:42 compute-0 ceph-mon[74194]: pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:43 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:44.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:44 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:44.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:28:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:28:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:44] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:28:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:44] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:28:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:44 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:44 compute-0 ceph-mon[74194]: pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:28:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:45 compute-0 sudo[203063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvsgscfgmbrbleavtebnoxdbrxnbxoxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242525.2158806-1493-150852832601199/AnsiballZ_file.py'
Sep 30 14:28:45 compute-0 sudo[203063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:45 compute-0 python3.9[203065]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:28:45 compute-0 sudo[203063]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:45 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:46 compute-0 sudo[203216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhhbaslvnzkmxoxtbxoyglsvkjggklsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242525.816744-1493-46505888902846/AnsiballZ_file.py'
Sep 30 14:28:46 compute-0 sudo[203216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:46.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:46 compute-0 python3.9[203218]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:28:46 compute-0 sudo[203216]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:28:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:46.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:28:46 compute-0 sudo[203368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmupjcyxlxivaxwjxmxzlabsqpryqzml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242526.442357-1493-122999964854730/AnsiballZ_file.py'
Sep 30 14:28:46 compute-0 sudo[203368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:46 compute-0 ceph-mon[74194]: pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:46 compute-0 python3.9[203370]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:28:46 compute-0 sudo[203368]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:47.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:28:47 compute-0 sudo[203520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odhwnvqcrrsbxshiowuplthyzrfjvoxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242527.0394166-1493-12829499591527/AnsiballZ_file.py'
Sep 30 14:28:47 compute-0 sudo[203520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:28:47 compute-0 python3.9[203522]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:28:47 compute-0 sudo[203520]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:47 compute-0 sudo[203674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uthuggontbhnrhaobmbauwrjlpviiikt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242527.6151497-1493-180838794243578/AnsiballZ_file.py'
Sep 30 14:28:47 compute-0 sudo[203674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:47 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:48 compute-0 python3.9[203676]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:28:48 compute-0 sudo[203674]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:48 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:48 compute-0 sudo[203826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihexfxnxadaxixpobrfvvcwxsczrieij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242528.1980422-1493-159997854174918/AnsiballZ_file.py'
Sep 30 14:28:48 compute-0 sudo[203826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:48.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:48 compute-0 python3.9[203828]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:28:48 compute-0 sudo[203826]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:48 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:48 compute-0 ceph-mon[74194]: pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:28:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:48.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:28:49 compute-0 sudo[203853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:28:49 compute-0 sudo[203853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:49 compute-0 sudo[203853]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:49 compute-0 sudo[203878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:28:49 compute-0 sudo[203878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:49 compute-0 sudo[204060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rudgkgxxmqjxxwafvidhcooqixbbfhqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242529.2098262-1622-89790555950357/AnsiballZ_stat.py'
Sep 30 14:28:49 compute-0 sudo[204060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:49 compute-0 sudo[203878]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 14:28:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:28:49 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:28:49 compute-0 python3.9[204062]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:28:49 compute-0 sudo[204060]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:49 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:50.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:50 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:50 compute-0 sudo[204186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xovjkimgzsrsbvyuzmqgzupefcpezwix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242529.2098262-1622-89790555950357/AnsiballZ_copy.py'
Sep 30 14:28:50 compute-0 sudo[204186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:50 compute-0 python3.9[204188]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759242529.2098262-1622-89790555950357/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:28:50 compute-0 sudo[204186]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:50 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:50 compute-0 ceph-mon[74194]: pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:51 compute-0 sudo[204338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tohltrhvlhdvihaaaxdlbyavmxfdgmsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242530.7618425-1622-248532963524326/AnsiballZ_stat.py'
Sep 30 14:28:51 compute-0 sudo[204338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:51 compute-0 python3.9[204340]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:28:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:28:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:28:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:51 compute-0 sudo[204338]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:28:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:28:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:51 compute-0 sudo[204464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muxdxqwfgneseflrrxdvvajzrqrbqozg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242530.7618425-1622-248532963524326/AnsiballZ_copy.py'
Sep 30 14:28:51 compute-0 sudo[204464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:51 compute-0 python3.9[204466]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759242530.7618425-1622-248532963524326/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:28:51 compute-0 sudo[204464]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 14:28:51 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:28:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:51 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:52.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Sep 30 14:28:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:28:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:28:52 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:28:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:28:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:28:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:28:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 284 B/s rd, 0 op/s
Sep 30 14:28:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:28:52 compute-0 sudo[204617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edvbfwoicvpryxiomfpzlhqdloegdxel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242531.9318066-1622-193119999114496/AnsiballZ_stat.py'
Sep 30 14:28:52 compute-0 sudo[204617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:28:52 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:28:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:28:52 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:28:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:28:52 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:28:52 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:52 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:52 compute-0 ceph-mon[74194]: pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:28:52 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:52 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:52 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:28:52 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:28:52 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:28:52 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:28:52 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:52 compute-0 sudo[204620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:28:52 compute-0 sudo[204620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:52 compute-0 sudo[204620]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:52 compute-0 sudo[204645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:28:52 compute-0 sudo[204645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:52 compute-0 python3.9[204619]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:28:52 compute-0 sudo[204617]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:52.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22740016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:52 compute-0 podman[204804]: 2025-09-30 14:28:52.762971034 +0000 UTC m=+0.038824854 container create 207f7cdb81571153b00fb0645ac7f1a029dd4b3dc29881c653424cf54795e38f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:28:52 compute-0 systemd[1]: Started libpod-conmon-207f7cdb81571153b00fb0645ac7f1a029dd4b3dc29881c653424cf54795e38f.scope.
Sep 30 14:28:52 compute-0 sudo[204845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oodwdbwvgrgskpsfqwhbhlikiogmciup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242531.9318066-1622-193119999114496/AnsiballZ_copy.py'
Sep 30 14:28:52 compute-0 sudo[204845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:28:52 compute-0 podman[204804]: 2025-09-30 14:28:52.745289289 +0000 UTC m=+0.021143139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:28:52 compute-0 podman[204804]: 2025-09-30 14:28:52.876715219 +0000 UTC m=+0.152569059 container init 207f7cdb81571153b00fb0645ac7f1a029dd4b3dc29881c653424cf54795e38f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 14:28:52 compute-0 podman[204804]: 2025-09-30 14:28:52.883512781 +0000 UTC m=+0.159366601 container start 207f7cdb81571153b00fb0645ac7f1a029dd4b3dc29881c653424cf54795e38f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_noether, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:28:52 compute-0 podman[204804]: 2025-09-30 14:28:52.88643887 +0000 UTC m=+0.162292690 container attach 207f7cdb81571153b00fb0645ac7f1a029dd4b3dc29881c653424cf54795e38f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:28:52 compute-0 wonderful_noether[204849]: 167 167
Sep 30 14:28:52 compute-0 systemd[1]: libpod-207f7cdb81571153b00fb0645ac7f1a029dd4b3dc29881c653424cf54795e38f.scope: Deactivated successfully.
Sep 30 14:28:52 compute-0 podman[204804]: 2025-09-30 14:28:52.889348028 +0000 UTC m=+0.165201858 container died 207f7cdb81571153b00fb0645ac7f1a029dd4b3dc29881c653424cf54795e38f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 14:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-aba51e8100cb404af28437f149899836e4ca0018e30086333860bb4713331ac4-merged.mount: Deactivated successfully.
Sep 30 14:28:53 compute-0 podman[204804]: 2025-09-30 14:28:53.022561786 +0000 UTC m=+0.298415606 container remove 207f7cdb81571153b00fb0645ac7f1a029dd4b3dc29881c653424cf54795e38f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:28:53 compute-0 python3.9[204851]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759242531.9318066-1622-193119999114496/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:28:53 compute-0 systemd[1]: libpod-conmon-207f7cdb81571153b00fb0645ac7f1a029dd4b3dc29881c653424cf54795e38f.scope: Deactivated successfully.
Sep 30 14:28:53 compute-0 sudo[204845]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:53 compute-0 podman[204898]: 2025-09-30 14:28:53.19772284 +0000 UTC m=+0.051662829 container create d7184d87ae797e7f7a05f4df287ffb8694e5b7c68aac9a5a988bf38bcb6a1d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldberg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:28:53 compute-0 systemd[1]: Started libpod-conmon-d7184d87ae797e7f7a05f4df287ffb8694e5b7c68aac9a5a988bf38bcb6a1d55.scope.
Sep 30 14:28:53 compute-0 podman[204898]: 2025-09-30 14:28:53.167749515 +0000 UTC m=+0.021689524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:28:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac10d28963d5525de1c79ac73dbd5c7f908a779548b20c82cc6e0a646a644a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac10d28963d5525de1c79ac73dbd5c7f908a779548b20c82cc6e0a646a644a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac10d28963d5525de1c79ac73dbd5c7f908a779548b20c82cc6e0a646a644a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac10d28963d5525de1c79ac73dbd5c7f908a779548b20c82cc6e0a646a644a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac10d28963d5525de1c79ac73dbd5c7f908a779548b20c82cc6e0a646a644a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:53 compute-0 ceph-mon[74194]: pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 284 B/s rd, 0 op/s
Sep 30 14:28:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:28:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:28:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:28:53 compute-0 podman[204898]: 2025-09-30 14:28:53.300965683 +0000 UTC m=+0.154905692 container init d7184d87ae797e7f7a05f4df287ffb8694e5b7c68aac9a5a988bf38bcb6a1d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldberg, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:28:53 compute-0 podman[204898]: 2025-09-30 14:28:53.311421334 +0000 UTC m=+0.165361323 container start d7184d87ae797e7f7a05f4df287ffb8694e5b7c68aac9a5a988bf38bcb6a1d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldberg, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:28:53 compute-0 podman[204898]: 2025-09-30 14:28:53.318582776 +0000 UTC m=+0.172522785 container attach d7184d87ae797e7f7a05f4df287ffb8694e5b7c68aac9a5a988bf38bcb6a1d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:28:53 compute-0 sudo[205045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnzblsqguoyunuohnxbqmjzefrqlqdqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242533.179054-1622-244271272446651/AnsiballZ_stat.py'
Sep 30 14:28:53 compute-0 sudo[205045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:53 compute-0 gracious_goldberg[204966]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:28:53 compute-0 gracious_goldberg[204966]: --> All data devices are unavailable
Sep 30 14:28:53 compute-0 python3.9[205047]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:28:53 compute-0 systemd[1]: libpod-d7184d87ae797e7f7a05f4df287ffb8694e5b7c68aac9a5a988bf38bcb6a1d55.scope: Deactivated successfully.
Sep 30 14:28:53 compute-0 podman[204898]: 2025-09-30 14:28:53.65422495 +0000 UTC m=+0.508164969 container died d7184d87ae797e7f7a05f4df287ffb8694e5b7c68aac9a5a988bf38bcb6a1d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:28:53 compute-0 sudo[205045]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac10d28963d5525de1c79ac73dbd5c7f908a779548b20c82cc6e0a646a644a2-merged.mount: Deactivated successfully.
Sep 30 14:28:53 compute-0 podman[204898]: 2025-09-30 14:28:53.726418449 +0000 UTC m=+0.580358438 container remove d7184d87ae797e7f7a05f4df287ffb8694e5b7c68aac9a5a988bf38bcb6a1d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_goldberg, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 14:28:53 compute-0 systemd[1]: libpod-conmon-d7184d87ae797e7f7a05f4df287ffb8694e5b7c68aac9a5a988bf38bcb6a1d55.scope: Deactivated successfully.
Sep 30 14:28:53 compute-0 sudo[204645]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:53 compute-0 sudo[205119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:28:53 compute-0 sudo[205119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:53 compute-0 sudo[205119]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:53 compute-0 sudo[205165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:28:53 compute-0 sudo[205165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:53 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:53 compute-0 sudo[205242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvnhamvpanaabtsyttuvpmxvqhhwzezj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242533.179054-1622-244271272446651/AnsiballZ_copy.py'
Sep 30 14:28:53 compute-0 sudo[205242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:54.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:54 compute-0 python3.9[205244]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759242533.179054-1622-244271272446651/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:28:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 284 B/s rd, 0 op/s
Sep 30 14:28:54 compute-0 sudo[205242]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:54 compute-0 podman[205286]: 2025-09-30 14:28:54.280420428 +0000 UTC m=+0.034726944 container create ea49da581cb5b44aae6624f8ab45a8138f30f48deee59856ae0baa0572bfc75b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:28:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:54 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:54 compute-0 systemd[1]: Started libpod-conmon-ea49da581cb5b44aae6624f8ab45a8138f30f48deee59856ae0baa0572bfc75b.scope.
Sep 30 14:28:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:28:54 compute-0 podman[205286]: 2025-09-30 14:28:54.349112103 +0000 UTC m=+0.103418649 container init ea49da581cb5b44aae6624f8ab45a8138f30f48deee59856ae0baa0572bfc75b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:28:54 compute-0 podman[205286]: 2025-09-30 14:28:54.357113898 +0000 UTC m=+0.111420414 container start ea49da581cb5b44aae6624f8ab45a8138f30f48deee59856ae0baa0572bfc75b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lederberg, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:28:54 compute-0 podman[205286]: 2025-09-30 14:28:54.265563409 +0000 UTC m=+0.019869945 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:28:54 compute-0 podman[205286]: 2025-09-30 14:28:54.361151796 +0000 UTC m=+0.115458322 container attach ea49da581cb5b44aae6624f8ab45a8138f30f48deee59856ae0baa0572bfc75b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lederberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:28:54 compute-0 jolly_lederberg[205337]: 167 167
Sep 30 14:28:54 compute-0 systemd[1]: libpod-ea49da581cb5b44aae6624f8ab45a8138f30f48deee59856ae0baa0572bfc75b.scope: Deactivated successfully.
Sep 30 14:28:54 compute-0 podman[205286]: 2025-09-30 14:28:54.362671857 +0000 UTC m=+0.116978383 container died ea49da581cb5b44aae6624f8ab45a8138f30f48deee59856ae0baa0572bfc75b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:28:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e748e49c7995fe14570e02746973751962a02f464eb5588488dddf901fa14c2-merged.mount: Deactivated successfully.
Sep 30 14:28:54 compute-0 podman[205286]: 2025-09-30 14:28:54.397319888 +0000 UTC m=+0.151626404 container remove ea49da581cb5b44aae6624f8ab45a8138f30f48deee59856ae0baa0572bfc75b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_lederberg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:28:54 compute-0 systemd[1]: libpod-conmon-ea49da581cb5b44aae6624f8ab45a8138f30f48deee59856ae0baa0572bfc75b.scope: Deactivated successfully.
Sep 30 14:28:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:54.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:54 compute-0 podman[205426]: 2025-09-30 14:28:54.55565047 +0000 UTC m=+0.039422020 container create 5812f82ddfdecd1c72bc768d5e352b5fc866ac78eec99037d804f1d3b05b705e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_engelbart, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:28:54 compute-0 systemd[1]: Started libpod-conmon-5812f82ddfdecd1c72bc768d5e352b5fc866ac78eec99037d804f1d3b05b705e.scope.
Sep 30 14:28:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:28:54 compute-0 sudo[205496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxfnsxysziemfftumtrchiyoojvuteum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242534.3468492-1622-21477329663079/AnsiballZ_stat.py'
Sep 30 14:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f98ba92a033a710c3f40282304a8d2c8d6164dd2cb5f2cd54837ccac22ca91f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:54 compute-0 sudo[205496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f98ba92a033a710c3f40282304a8d2c8d6164dd2cb5f2cd54837ccac22ca91f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f98ba92a033a710c3f40282304a8d2c8d6164dd2cb5f2cd54837ccac22ca91f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f98ba92a033a710c3f40282304a8d2c8d6164dd2cb5f2cd54837ccac22ca91f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:54 compute-0 podman[205426]: 2025-09-30 14:28:54.539617259 +0000 UTC m=+0.023388839 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:28:54 compute-0 podman[205426]: 2025-09-30 14:28:54.637117228 +0000 UTC m=+0.120888778 container init 5812f82ddfdecd1c72bc768d5e352b5fc866ac78eec99037d804f1d3b05b705e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_engelbart, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:28:54 compute-0 podman[205426]: 2025-09-30 14:28:54.646337455 +0000 UTC m=+0.130108995 container start 5812f82ddfdecd1c72bc768d5e352b5fc866ac78eec99037d804f1d3b05b705e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:28:54 compute-0 podman[205426]: 2025-09-30 14:28:54.649658675 +0000 UTC m=+0.133430245 container attach 5812f82ddfdecd1c72bc768d5e352b5fc866ac78eec99037d804f1d3b05b705e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:28:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:54] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:28:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:28:54] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:28:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:54 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:54 compute-0 python3.9[205498]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:28:54 compute-0 sudo[205496]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]: {
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:     "0": [
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:         {
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "devices": [
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "/dev/loop3"
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             ],
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "lv_name": "ceph_lv0",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "lv_size": "21470642176",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "name": "ceph_lv0",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "tags": {
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.cluster_name": "ceph",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.crush_device_class": "",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.encrypted": "0",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.osd_id": "0",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.type": "block",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.vdo": "0",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:                 "ceph.with_tpm": "0"
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             },
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "type": "block",
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:             "vg_name": "ceph_vg0"
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:         }
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]:     ]
Sep 30 14:28:54 compute-0 optimistic_engelbart[205489]: }
Sep 30 14:28:54 compute-0 systemd[1]: libpod-5812f82ddfdecd1c72bc768d5e352b5fc866ac78eec99037d804f1d3b05b705e.scope: Deactivated successfully.
Sep 30 14:28:54 compute-0 podman[205426]: 2025-09-30 14:28:54.951737988 +0000 UTC m=+0.435509548 container died 5812f82ddfdecd1c72bc768d5e352b5fc866ac78eec99037d804f1d3b05b705e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_engelbart, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:28:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f98ba92a033a710c3f40282304a8d2c8d6164dd2cb5f2cd54837ccac22ca91f-merged.mount: Deactivated successfully.
Sep 30 14:28:54 compute-0 podman[205426]: 2025-09-30 14:28:54.996252773 +0000 UTC m=+0.480024353 container remove 5812f82ddfdecd1c72bc768d5e352b5fc866ac78eec99037d804f1d3b05b705e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_engelbart, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:28:55 compute-0 systemd[1]: libpod-conmon-5812f82ddfdecd1c72bc768d5e352b5fc866ac78eec99037d804f1d3b05b705e.scope: Deactivated successfully.
Sep 30 14:28:55 compute-0 sudo[205165]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:55 compute-0 sudo[205585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:28:55 compute-0 sudo[205585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:55 compute-0 sudo[205585]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:55 compute-0 sudo[205629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:28:55 compute-0 sudo[205629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:55 compute-0 sudo[205687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmvsdvzvvbxiqlyzjheccmbvkywcakey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242534.3468492-1622-21477329663079/AnsiballZ_copy.py'
Sep 30 14:28:55 compute-0 sudo[205687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:55 compute-0 ceph-mon[74194]: pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 284 B/s rd, 0 op/s
Sep 30 14:28:55 compute-0 python3.9[205689]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759242534.3468492-1622-21477329663079/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:28:55 compute-0 sudo[205687]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:55 compute-0 podman[205755]: 2025-09-30 14:28:55.541267211 +0000 UTC m=+0.041101345 container create 9647ac4e0f497106aaa822e8d7e29f2d7e2117ec3fe7d0d3a6b43469dcd1b0c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:28:55 compute-0 systemd[1]: Started libpod-conmon-9647ac4e0f497106aaa822e8d7e29f2d7e2117ec3fe7d0d3a6b43469dcd1b0c6.scope.
Sep 30 14:28:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:28:55 compute-0 podman[205755]: 2025-09-30 14:28:55.522740063 +0000 UTC m=+0.022574227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:28:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:28:55 compute-0 podman[205755]: 2025-09-30 14:28:55.637139975 +0000 UTC m=+0.136974149 container init 9647ac4e0f497106aaa822e8d7e29f2d7e2117ec3fe7d0d3a6b43469dcd1b0c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 14:28:55 compute-0 podman[205755]: 2025-09-30 14:28:55.645691565 +0000 UTC m=+0.145525709 container start 9647ac4e0f497106aaa822e8d7e29f2d7e2117ec3fe7d0d3a6b43469dcd1b0c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:28:55 compute-0 podman[205755]: 2025-09-30 14:28:55.649518238 +0000 UTC m=+0.149352392 container attach 9647ac4e0f497106aaa822e8d7e29f2d7e2117ec3fe7d0d3a6b43469dcd1b0c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:28:55 compute-0 frosty_poitras[205820]: 167 167
Sep 30 14:28:55 compute-0 systemd[1]: libpod-9647ac4e0f497106aaa822e8d7e29f2d7e2117ec3fe7d0d3a6b43469dcd1b0c6.scope: Deactivated successfully.
Sep 30 14:28:55 compute-0 podman[205755]: 2025-09-30 14:28:55.651640025 +0000 UTC m=+0.151474179 container died 9647ac4e0f497106aaa822e8d7e29f2d7e2117ec3fe7d0d3a6b43469dcd1b0c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:28:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-463e99b36a0cbc456aa9ce0229491485a394098f8108234ea905ef7801fa258c-merged.mount: Deactivated successfully.
Sep 30 14:28:55 compute-0 podman[205755]: 2025-09-30 14:28:55.691902366 +0000 UTC m=+0.191736510 container remove 9647ac4e0f497106aaa822e8d7e29f2d7e2117ec3fe7d0d3a6b43469dcd1b0c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_poitras, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 14:28:55 compute-0 systemd[1]: libpod-conmon-9647ac4e0f497106aaa822e8d7e29f2d7e2117ec3fe7d0d3a6b43469dcd1b0c6.scope: Deactivated successfully.
Sep 30 14:28:55 compute-0 sudo[205915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acideqtwyasttaxsemkiwyearjagnwvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242535.5434575-1622-15941550007493/AnsiballZ_stat.py'
Sep 30 14:28:55 compute-0 sudo[205915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:55 compute-0 podman[205921]: 2025-09-30 14:28:55.862194229 +0000 UTC m=+0.049589453 container create b1d0ee364433c69ab1a6efd39379b7b9f882f390589794347c8b2767dd52eb9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:28:55 compute-0 systemd[1]: Started libpod-conmon-b1d0ee364433c69ab1a6efd39379b7b9f882f390589794347c8b2767dd52eb9c.scope.
Sep 30 14:28:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e63df148b2f566a1ec26b598fa3e7e9cf1bb1bba85c3398af92af679b5418eb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e63df148b2f566a1ec26b598fa3e7e9cf1bb1bba85c3398af92af679b5418eb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e63df148b2f566a1ec26b598fa3e7e9cf1bb1bba85c3398af92af679b5418eb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e63df148b2f566a1ec26b598fa3e7e9cf1bb1bba85c3398af92af679b5418eb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:28:55 compute-0 podman[205921]: 2025-09-30 14:28:55.840025293 +0000 UTC m=+0.027420537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:28:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:55 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142855 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:28:55 compute-0 podman[205921]: 2025-09-30 14:28:55.941755966 +0000 UTC m=+0.129151220 container init b1d0ee364433c69ab1a6efd39379b7b9f882f390589794347c8b2767dd52eb9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 14:28:55 compute-0 podman[205921]: 2025-09-30 14:28:55.947753047 +0000 UTC m=+0.135148271 container start b1d0ee364433c69ab1a6efd39379b7b9f882f390589794347c8b2767dd52eb9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:28:55 compute-0 podman[205921]: 2025-09-30 14:28:55.951028455 +0000 UTC m=+0.138423749 container attach b1d0ee364433c69ab1a6efd39379b7b9f882f390589794347c8b2767dd52eb9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:28:56 compute-0 python3.9[205923]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:28:56 compute-0 sudo[205915]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:56.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 284 B/s rd, 0 op/s
Sep 30 14:28:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:56 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:56 compute-0 sudo[206114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbwhvmdymgtosmkrouufaojrivmmusxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242535.5434575-1622-15941550007493/AnsiballZ_copy.py'
Sep 30 14:28:56 compute-0 sudo[206114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:56 compute-0 lvm[206137]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:28:56 compute-0 lvm[206137]: VG ceph_vg0 finished
Sep 30 14:28:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:56.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:56 compute-0 nifty_bell[205938]: {}
Sep 30 14:28:56 compute-0 python3.9[206120]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759242535.5434575-1622-15941550007493/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:28:56 compute-0 systemd[1]: libpod-b1d0ee364433c69ab1a6efd39379b7b9f882f390589794347c8b2767dd52eb9c.scope: Deactivated successfully.
Sep 30 14:28:56 compute-0 systemd[1]: libpod-b1d0ee364433c69ab1a6efd39379b7b9f882f390589794347c8b2767dd52eb9c.scope: Consumed 1.026s CPU time.
Sep 30 14:28:56 compute-0 sudo[206114]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:56 compute-0 podman[206140]: 2025-09-30 14:28:56.639783062 +0000 UTC m=+0.026027800 container died b1d0ee364433c69ab1a6efd39379b7b9f882f390589794347c8b2767dd52eb9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:28:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e63df148b2f566a1ec26b598fa3e7e9cf1bb1bba85c3398af92af679b5418eb1-merged.mount: Deactivated successfully.
Sep 30 14:28:56 compute-0 podman[206140]: 2025-09-30 14:28:56.677047643 +0000 UTC m=+0.063292351 container remove b1d0ee364433c69ab1a6efd39379b7b9f882f390589794347c8b2767dd52eb9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_bell, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:28:56 compute-0 systemd[1]: libpod-conmon-b1d0ee364433c69ab1a6efd39379b7b9f882f390589794347c8b2767dd52eb9c.scope: Deactivated successfully.
Sep 30 14:28:56 compute-0 sudo[205629]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:28:56 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:28:56 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:56 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:56 compute-0 sudo[206207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:28:56 compute-0 sudo[206207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:28:56 compute-0 sudo[206207]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:56 compute-0 sudo[206329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqcerpjalnarqmlvgwctagvodrdwkikc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242536.7290425-1622-245988627129766/AnsiballZ_stat.py'
Sep 30 14:28:56 compute-0 sudo[206329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:57.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:28:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:57.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:28:57 compute-0 python3.9[206331]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:28:57 compute-0 sudo[206329]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:57 compute-0 sudo[206453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwaonrzhhpdnhcjbknvoochzcnoqlxod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242536.7290425-1622-245988627129766/AnsiballZ_copy.py'
Sep 30 14:28:57 compute-0 sudo[206453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:57 compute-0 ceph-mon[74194]: pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 284 B/s rd, 0 op/s
Sep 30 14:28:57 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:57 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:28:57 compute-0 python3.9[206455]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759242536.7290425-1622-245988627129766/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:28:57 compute-0 sudo[206453]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:57 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:28:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:28:58.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:28:58 compute-0 sudo[206606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyohialprwqcszljgpkbgaffxaooithk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242537.9025698-1622-126281906815994/AnsiballZ_stat.py'
Sep 30 14:28:58 compute-0 sudo[206606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 189 B/s rd, 0 op/s
Sep 30 14:28:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:58 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:58 compute-0 python3.9[206608]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:28:58 compute-0 sudo[206606]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:28:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:28:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:28:58.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:28:58 compute-0 sudo[206731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwxydkromkwtldukwfxuaypztnjuhwml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242537.9025698-1622-126281906815994/AnsiballZ_copy.py'
Sep 30 14:28:58 compute-0 sudo[206731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:58 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:28:58 compute-0 python3.9[206733]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759242537.9025698-1622-126281906815994/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:28:58 compute-0 sudo[206731]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:58.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:28:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:58.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:28:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:28:58.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:28:59
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'volumes', '.nfs', 'default.rgw.meta', 'vms', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'default.rgw.control']
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:28:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:28:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:28:59 compute-0 sudo[206884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pydxpusazgslgqacotcuqunnmitczpnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242539.4185438-1961-150119146793615/AnsiballZ_command.py'
Sep 30 14:28:59 compute-0 sudo[206884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:28:59 compute-0 ceph-mon[74194]: pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 189 B/s rd, 0 op/s
Sep 30 14:28:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:28:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:28:59 compute-0 python3.9[206886]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Sep 30 14:28:59 compute-0 sudo[206884]: pam_unix(sudo:session): session closed for user root
Sep 30 14:28:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:28:59 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:00.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 189 B/s rd, 0 op/s
Sep 30 14:29:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:00 compute-0 sudo[207044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzmtmwxeqtbrovvwclmozsezwwwxeaze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242540.20929-1988-160600995230145/AnsiballZ_file.py'
Sep 30 14:29:00 compute-0 sudo[207044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:00 compute-0 sudo[207036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:29:00 compute-0 sudo[207036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:29:00 compute-0 sudo[207036]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:00.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:00 compute-0 python3.9[207060]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:00 compute-0 sudo[207044]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:29:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:29:01 compute-0 sudo[207215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axzbytzhzucwwljzsrjxikwvnzfxrmrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242540.79925-1988-196039238102477/AnsiballZ_file.py'
Sep 30 14:29:01 compute-0 sudo[207215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:01 compute-0 python3.9[207217]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:01 compute-0 sudo[207215]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:01 compute-0 sudo[207368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfcjbunskopmvnatgoskscqbmhocpvlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242541.4350865-1988-186457794092004/AnsiballZ_file.py'
Sep 30 14:29:01 compute-0 sudo[207368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:01 compute-0 ceph-mon[74194]: pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 189 B/s rd, 0 op/s
Sep 30 14:29:01 compute-0 python3.9[207370]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:01 compute-0 sudo[207368]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:01 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:02.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 189 B/s rd, 0 op/s
Sep 30 14:29:02 compute-0 sudo[207521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjlshnrbetfodlkzffhtoykoxaccuftj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242542.0259871-1988-90028877079191/AnsiballZ_file.py'
Sep 30 14:29:02 compute-0 sudo[207521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:02 compute-0 python3.9[207523]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:02 compute-0 sudo[207521]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:02.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:02 compute-0 sudo[207673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmstmmykzpjasivyjuabxpyfnlxntsnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242542.6270454-1988-192385057670865/AnsiballZ_file.py'
Sep 30 14:29:02 compute-0 sudo[207673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:03 compute-0 python3.9[207675]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:03 compute-0 sudo[207673]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:03 compute-0 sudo[207826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oguclamkzvhvapnztqzhwmqafwgmivpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242543.241149-1988-165442754990179/AnsiballZ_file.py'
Sep 30 14:29:03 compute-0 sudo[207826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:03 compute-0 python3.9[207828]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:03 compute-0 sudo[207826]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:03 compute-0 ceph-mon[74194]: pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 189 B/s rd, 0 op/s
Sep 30 14:29:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:03 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:04 compute-0 sudo[207988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpmzacraynyhmejrxjbhjldidwlqzuee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242543.8258848-1988-93490308493861/AnsiballZ_file.py'
Sep 30 14:29:04 compute-0 sudo[207988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:04 compute-0 podman[207953]: 2025-09-30 14:29:04.140145327 +0000 UTC m=+0.082206639 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Sep 30 14:29:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:04.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:29:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00040d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:04 compute-0 python3.9[207996]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:04 compute-0 sudo[207988]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:29:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:04.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:04 compute-0 sudo[208157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrtxjxksqyxwyfzjznuvtryylpnlwgza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242544.4491434-1988-75252377281734/AnsiballZ_file.py'
Sep 30 14:29:04 compute-0 sudo[208157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:04] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Sep 30 14:29:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:04] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Sep 30 14:29:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:04 compute-0 python3.9[208159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:04 compute-0 sudo[208157]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:05 compute-0 sudo[208309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqxajewphusnhjgmczwlhcctzjwlfyak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242545.012833-1988-24919756688244/AnsiballZ_file.py'
Sep 30 14:29:05 compute-0 sudo[208309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:05 compute-0 python3.9[208311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:05 compute-0 sudo[208309]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:05 compute-0 ceph-mon[74194]: pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:29:05 compute-0 sudo[208463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hffhqmxjcvajyozdyfnhepzmrxlfugxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242545.5754602-1988-260335763078671/AnsiballZ_file.py'
Sep 30 14:29:05 compute-0 sudo[208463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:05 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:06 compute-0 python3.9[208465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:06 compute-0 sudo[208463]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:06.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:29:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:06 compute-0 sudo[208615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knyikilmvvegazwcemvmpiwimbeaxicq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242546.198885-1988-205738689824173/AnsiballZ_file.py'
Sep 30 14:29:06 compute-0 sudo[208615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:29:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:06.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:29:06 compute-0 python3.9[208617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:06 compute-0 sudo[208615]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:07.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:29:07 compute-0 sudo[208767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmsieimxxzaftqznovybwajakwklioli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242546.825372-1988-135132740520083/AnsiballZ_file.py'
Sep 30 14:29:07 compute-0 sudo[208767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:07 compute-0 python3.9[208769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:07 compute-0 sudo[208767]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:07 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:29:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:07 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:29:07 compute-0 sudo[208920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytjednctvqukljslehasbvgnrnibenet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242547.4095995-1988-61548820585872/AnsiballZ_file.py'
Sep 30 14:29:07 compute-0 sudo[208920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:07 compute-0 ceph-mon[74194]: pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:29:07 compute-0 python3.9[208922]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:07 compute-0 sudo[208920]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:07 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:08.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:29:08 compute-0 sudo[209073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbowwstjxinmjwiaestbbqeoznxwleuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242548.000139-1988-82574973354705/AnsiballZ_file.py'
Sep 30 14:29:08 compute-0 sudo[209073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:08 compute-0 python3.9[209075]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:08 compute-0 sudo[209073]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:08.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:08.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:29:09 compute-0 sudo[209226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epiumwpglgoliqoascnbvudgewbforlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242549.3465385-2285-235348077069659/AnsiballZ_stat.py'
Sep 30 14:29:09 compute-0 sudo[209226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:09 compute-0 python3.9[209228]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:09 compute-0 sudo[209226]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:09 compute-0 ceph-mon[74194]: pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:29:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:09 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:10.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:10 compute-0 sudo[209350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vagmcuafvlkzysmyjxqrbqvdonjrirmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242549.3465385-2285-235348077069659/AnsiballZ_copy.py'
Sep 30 14:29:10 compute-0 sudo[209350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:29:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:10 compute-0 python3.9[209352]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242549.3465385-2285-235348077069659/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:10 compute-0 sudo[209350]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:29:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:29:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:10.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:29:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280004610 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:10 compute-0 sudo[209518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zarkwandqzipixaxxdlfehkckpnpvcmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242550.581118-2285-198005997915307/AnsiballZ_stat.py'
Sep 30 14:29:10 compute-0 sudo[209518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:10 compute-0 podman[209476]: 2025-09-30 14:29:10.857976305 +0000 UTC m=+0.051355630 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Sep 30 14:29:11 compute-0 python3.9[209526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:11 compute-0 sudo[209518]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:11 compute-0 sudo[209648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgxirifklbsgsltkjpkatcgyjtjljkhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242550.581118-2285-198005997915307/AnsiballZ_copy.py'
Sep 30 14:29:11 compute-0 sudo[209648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:11 compute-0 python3.9[209650]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242550.581118-2285-198005997915307/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:11 compute-0 sudo[209648]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:11 compute-0 ceph-mon[74194]: pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:29:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:11 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:11 compute-0 sudo[209801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqtebldqbvlbbxusrhvfmvizrhrfsnfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242551.705233-2285-202943891442000/AnsiballZ_stat.py'
Sep 30 14:29:11 compute-0 sudo[209801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:12 compute-0 python3.9[209803]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:12 compute-0 sudo[209801]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:12.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:29:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:12 compute-0 sudo[209924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udigwuyxmwypvvtndojfpqrngebrrqqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242551.705233-2285-202943891442000/AnsiballZ_copy.py'
Sep 30 14:29:12 compute-0 sudo[209924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:12.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:12 compute-0 python3.9[209926]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242551.705233-2285-202943891442000/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:12 compute-0 sudo[209924]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:13 compute-0 sudo[210076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxyahqlzqregcseisklolgskdpzifyml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242552.8443449-2285-113704038027349/AnsiballZ_stat.py'
Sep 30 14:29:13 compute-0 sudo[210076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:13 compute-0 python3.9[210078]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:13 compute-0 sudo[210076]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:13 compute-0 sudo[210200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rceeoncjbwkdrclggkgapweilpphraxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242552.8443449-2285-113704038027349/AnsiballZ_copy.py'
Sep 30 14:29:13 compute-0 sudo[210200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:13 compute-0 python3.9[210202]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242552.8443449-2285-113704038027349/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:13 compute-0 sudo[210200]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:13 compute-0 ceph-mon[74194]: pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:29:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:13 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a00040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:14.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:14 compute-0 sudo[210354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsrmzumsyahiouhyoazpjacjkxuphuct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242553.9551857-2285-268964383089120/AnsiballZ_stat.py'
Sep 30 14:29:14 compute-0 sudo[210354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:29:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:14 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:14 compute-0 python3.9[210356]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:14 compute-0 sudo[210354]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:14.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:29:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:29:14 compute-0 sudo[210477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flubqynnepuvnrjhpsmsyrmlpcpazgve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242553.9551857-2285-268964383089120/AnsiballZ_copy.py'
Sep 30 14:29:14 compute-0 sudo[210477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:14] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Sep 30 14:29:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:14] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Sep 30 14:29:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:14 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:29:14 compute-0 python3.9[210479]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242553.9551857-2285-268964383089120/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:14 compute-0 sudo[210477]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:15 compute-0 sudo[210629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpyexdldsmyiqprecifggnozwbmcdvra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242555.036428-2285-97523888257327/AnsiballZ_stat.py'
Sep 30 14:29:15 compute-0 sudo[210629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:15 compute-0 python3.9[210631]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:15 compute-0 sudo[210629]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:15 compute-0 sudo[210754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qartvmflhgewbeoxqnfxkfiixipuldit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242555.036428-2285-97523888257327/AnsiballZ_copy.py'
Sep 30 14:29:15 compute-0 sudo[210754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:15 compute-0 ceph-mon[74194]: pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:29:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142915 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:29:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:15 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:16 compute-0 python3.9[210756]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242555.036428-2285-97523888257327/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:16 compute-0 sudo[210754]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:16.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:29:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:16 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:16 compute-0 sudo[210906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unbvydcwcyjkbewhntzdoadknmnvavkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242556.2389467-2285-280135441852734/AnsiballZ_stat.py'
Sep 30 14:29:16 compute-0 sudo[210906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:16.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:16 compute-0 python3.9[210908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:16 compute-0 sudo[210906]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:16 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:16 compute-0 sudo[211029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suirktzvkdsonxtkybjykivdygpgjdcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242556.2389467-2285-280135441852734/AnsiballZ_copy.py'
Sep 30 14:29:16 compute-0 sudo[211029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:17.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:29:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:17.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:29:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:17.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:29:17 compute-0 python3.9[211031]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242556.2389467-2285-280135441852734/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:17 compute-0 sudo[211029]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:17 compute-0 sudo[211182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxyvgxwzvkqasjperijjaktjwqwvdnvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242557.3161435-2285-191685188923632/AnsiballZ_stat.py'
Sep 30 14:29:17 compute-0 sudo[211182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:17 compute-0 python3.9[211184]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:17 compute-0 sudo[211182]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:17 compute-0 ceph-mon[74194]: pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:29:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:17 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:18 compute-0 sudo[211306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asgdxrpjkuextrpnshqupmtylkooszkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242557.3161435-2285-191685188923632/AnsiballZ_copy.py'
Sep 30 14:29:18 compute-0 sudo[211306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:18.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:29:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:18 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:18 compute-0 python3.9[211308]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242557.3161435-2285-191685188923632/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:18 compute-0 sudo[211306]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:18.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:18 compute-0 sudo[211458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylnzxcyezefmaqrdxnyfkoffqpetynnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242558.4675307-2285-254449623572380/AnsiballZ_stat.py'
Sep 30 14:29:18 compute-0 sudo[211458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:18 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:18 compute-0 python3.9[211460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:18 compute-0 sudo[211458]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:18.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:29:19 compute-0 sudo[211581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upmcjteujxdnkhkkxfqxxybdwhmidewo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242558.4675307-2285-254449623572380/AnsiballZ_copy.py'
Sep 30 14:29:19 compute-0 sudo[211581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:19 compute-0 python3.9[211583]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242558.4675307-2285-254449623572380/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:19 compute-0 sudo[211581]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:19 compute-0 sudo[211735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdpylgttkmmsyxotltdblfwlweppxbbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242559.5657153-2285-133191983748441/AnsiballZ_stat.py'
Sep 30 14:29:19 compute-0 sudo[211735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:19 compute-0 ceph-mon[74194]: pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:29:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:19 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:20 compute-0 python3.9[211737]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:20 compute-0 sudo[211735]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:20.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:29:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:20 compute-0 sudo[211858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywozeluyznmfgkamqmdogglfzimybyxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242559.5657153-2285-133191983748441/AnsiballZ_copy.py'
Sep 30 14:29:20 compute-0 sudo[211858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:20 compute-0 python3.9[211860]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242559.5657153-2285-133191983748441/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:20 compute-0 sudo[211858]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:20.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:20 compute-0 sudo[211861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:29:20 compute-0 sudo[211861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:29:20 compute-0 sudo[211861]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:20 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:20 compute-0 sudo[212035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwrojeshzldwwxmopabmozcfbkvqazyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242560.6782684-2285-205016206295734/AnsiballZ_stat.py'
Sep 30 14:29:20 compute-0 sudo[212035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:21 compute-0 python3.9[212037]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:21 compute-0 sudo[212035]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:21 compute-0 ceph-mon[74194]: pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:29:21 compute-0 sudo[212159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxuqgyfpvqevhiiyjzsrhbdpxnimbsyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242560.6782684-2285-205016206295734/AnsiballZ_copy.py'
Sep 30 14:29:21 compute-0 sudo[212159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:21 compute-0 python3.9[212161]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242560.6782684-2285-205016206295734/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:21 compute-0 sudo[212159]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:21 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:22 compute-0 sudo[212312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ausdufetajtbtlxzllgocvomjytcdeez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242561.9016094-2285-227619892681742/AnsiballZ_stat.py'
Sep 30 14:29:22 compute-0 sudo[212312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:22.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:29:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:22 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:22 compute-0 python3.9[212314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:22 compute-0 sudo[212312]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:22.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:22 compute-0 sudo[212435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbrztrovsehlzabmytqsjuyffgufrctb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242561.9016094-2285-227619892681742/AnsiballZ_copy.py'
Sep 30 14:29:22 compute-0 sudo[212435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:22 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:22 compute-0 python3.9[212437]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242561.9016094-2285-227619892681742/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:22 compute-0 sudo[212435]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:23 compute-0 ceph-mon[74194]: pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:29:23 compute-0 sudo[212588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovhxshjsgtesefnjhxfgujzpmdvzzhws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242563.0693264-2285-25778468166955/AnsiballZ_stat.py'
Sep 30 14:29:23 compute-0 sudo[212588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:23 compute-0 python3.9[212590]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:23 compute-0 sudo[212588]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:23 compute-0 sudo[212712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgrvtlkgbsmqglxukqrgbkftrttpovqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242563.0693264-2285-25778468166955/AnsiballZ_copy.py'
Sep 30 14:29:23 compute-0 sudo[212712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:23 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:24 compute-0 python3.9[212714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242563.0693264-2285-25778468166955/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:24 compute-0 sudo[212712]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:24.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:29:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:24 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:24 compute-0 sudo[212864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahvelabbzuhbizwkivlghzqsxhgvppid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242564.2231803-2285-75009398053944/AnsiballZ_stat.py'
Sep 30 14:29:24 compute-0 sudo[212864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:29:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:24.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:29:24 compute-0 python3.9[212866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:24 compute-0 sudo[212864]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:24] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Sep 30 14:29:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:24] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Sep 30 14:29:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:24 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:25 compute-0 sudo[212987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvnwlolrhbohvlvgkjqjrhpzjytaluvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242564.2231803-2285-75009398053944/AnsiballZ_copy.py'
Sep 30 14:29:25 compute-0 sudo[212987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:25 compute-0 python3.9[212989]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242564.2231803-2285-75009398053944/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:25 compute-0 sudo[212987]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:25 compute-0 ceph-mon[74194]: pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:29:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:25 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:26.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:29:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:26 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2274003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:26.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:26 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:26 compute-0 sshd-session[213016]: Invalid user kevin from 194.0.234.93 port 20322
Sep 30 14:29:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:27.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:29:27 compute-0 python3.9[213143]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:29:27 compute-0 sshd-session[213016]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 14:29:27 compute-0 sshd-session[213016]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=194.0.234.93
Sep 30 14:29:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142927 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:29:27 compute-0 ceph-mon[74194]: pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:29:27 compute-0 sudo[213298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybkpqukyemvjkotbfxkqlndidszxmyrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242567.4403572-2903-68640495682657/AnsiballZ_seboolean.py'
Sep 30 14:29:27 compute-0 sudo[213298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:27 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:28 compute-0 python3.9[213300]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Sep 30 14:29:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:28.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:29:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:28 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:29:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:28.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:29:28 compute-0 sshd-session[213016]: Failed password for invalid user kevin from 194.0.234.93 port 20322 ssh2
Sep 30 14:29:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:28 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0033b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:28.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:29:29 compute-0 sudo[213298]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:29 compute-0 ceph-mon[74194]: pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:29:29 compute-0 sshd-session[213016]: Connection closed by invalid user kevin 194.0.234.93 port 20322 [preauth]
Sep 30 14:29:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:29:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:29:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:29:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:29:29 compute-0 sudo[213457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsdmcrfeponrhdbszwrefpudiuizyykv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242569.4966214-2927-112408515424539/AnsiballZ_copy.py'
Sep 30 14:29:29 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Sep 30 14:29:29 compute-0 sudo[213457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:29:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:29:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:29:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:29:29 compute-0 python3.9[213459]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:29 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0033b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:29 compute-0 sudo[213457]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:30.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:29:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:30 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:30 compute-0 sudo[213609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkbzokanjokrwhrgqhefkuqrudqkdkhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242570.1155407-2927-49875657660454/AnsiballZ_copy.py'
Sep 30 14:29:30 compute-0 sudo[213609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:29:30 compute-0 python3.9[213611]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:30.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:30 compute-0 sudo[213609]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:30 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:30 compute-0 sudo[213761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vljieoksobdashftbtfjdxhanfeqcvai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242570.7156541-2927-81564250472145/AnsiballZ_copy.py'
Sep 30 14:29:30 compute-0 sudo[213761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:31 compute-0 python3.9[213763]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:31 compute-0 sudo[213761]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:31 compute-0 ceph-mon[74194]: pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:29:31 compute-0 sudo[213914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dukltkseaebmqniywvogxtzngnoxnhtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242571.3027976-2927-206062130077079/AnsiballZ_copy.py'
Sep 30 14:29:31 compute-0 sudo[213914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:31 compute-0 python3.9[213916]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:31 compute-0 sudo[213914]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:31 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:32 compute-0 sudo[214067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzdppvaetyxuzwdkryoxgwgnldxsisml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242571.9094646-2927-155646148373017/AnsiballZ_copy.py'
Sep 30 14:29:32 compute-0 sudo[214067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:32.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:29:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:32 compute-0 python3.9[214069]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:32 compute-0 sudo[214067]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:32.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:32 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:33 compute-0 sudo[214219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgcwzglcqylhcnixjgrahvyasfuntcjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242572.7788699-3035-221649468533414/AnsiballZ_copy.py'
Sep 30 14:29:33 compute-0 sudo[214219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:33 compute-0 python3.9[214221]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:33 compute-0 sudo[214219]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:33 compute-0 ceph-mon[74194]: pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:29:33 compute-0 sudo[214372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sswoqlepzhvbgaadtjtvvfrrcilzccvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242573.4640002-3035-43720857324520/AnsiballZ_copy.py'
Sep 30 14:29:33 compute-0 sudo[214372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:33 compute-0 python3.9[214375]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:33 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:33 compute-0 sudo[214372]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:34.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:29:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:34 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:34 compute-0 sudo[214538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwflzcuzgaleaozmwozwpduarkwvorbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242574.1160996-3035-182174418224724/AnsiballZ_copy.py'
Sep 30 14:29:34 compute-0 sudo[214538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:34 compute-0 podman[214499]: 2025-09-30 14:29:34.470238514 +0000 UTC m=+0.113946801 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, container_name=ovn_controller, org.label-schema.license=GPLv2)
Sep 30 14:29:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:34.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:34 compute-0 python3.9[214544]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:34 compute-0 sudo[214538]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:34] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:29:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:34] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:29:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:34 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:35 compute-0 sudo[214703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhzylsiemerzqbpkdngemqpgomsnrxci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242574.7749736-3035-44115065236454/AnsiballZ_copy.py'
Sep 30 14:29:35 compute-0 sudo[214703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:35 compute-0 python3.9[214705]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:35 compute-0 sudo[214703]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:35 compute-0 ceph-mon[74194]: pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:29:35 compute-0 sudo[214856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-metqbvkccjlzdpnljevhriloomrqqeik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242575.360931-3035-74712740292631/AnsiballZ_copy.py'
Sep 30 14:29:35 compute-0 sudo[214856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:35 compute-0 python3.9[214858]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:35 compute-0 sudo[214856]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:35 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:36.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:29:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:36 compute-0 sudo[215011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghcvjgbeedolgwtjwuwqroatvrooxmrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242576.0467017-3143-10600032821631/AnsiballZ_systemd.py'
Sep 30 14:29:36 compute-0 sudo[215011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:29:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:36.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:36 compute-0 python3.9[215013]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:29:36 compute-0 systemd[1]: Reloading.
Sep 30 14:29:36 compute-0 systemd-rc-local-generator[215040]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:29:36 compute-0 systemd-sysv-generator[215045]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:29:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:36 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:36 compute-0 sshd-session[214990]: Invalid user admin from 78.128.112.74 port 45098
Sep 30 14:29:37 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Sep 30 14:29:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:37.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:29:37 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Sep 30 14:29:37 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Sep 30 14:29:37 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Sep 30 14:29:37 compute-0 systemd[1]: Starting libvirt logging daemon...
Sep 30 14:29:37 compute-0 sshd-session[214990]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 14:29:37 compute-0 sshd-session[214990]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=78.128.112.74
Sep 30 14:29:37 compute-0 systemd[1]: Started libvirt logging daemon.
Sep 30 14:29:37 compute-0 sudo[215011]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:37 compute-0 ceph-mon[74194]: pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:29:37 compute-0 sudo[215205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iumvihxjjolgzsmzbjbjmuefmkshdqxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242577.3682992-3143-15415581931390/AnsiballZ_systemd.py'
Sep 30 14:29:37 compute-0 sudo[215205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:37 compute-0 python3.9[215207]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:29:37 compute-0 systemd[1]: Reloading.
Sep 30 14:29:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:37 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:38 compute-0 systemd-rc-local-generator[215233]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:29:38 compute-0 systemd-sysv-generator[215237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:29:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:38.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:29:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:29:38.243 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:29:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:29:38.244 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:29:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:29:38.244 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:29:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:38 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:38 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Sep 30 14:29:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Sep 30 14:29:38 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Sep 30 14:29:38 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Sep 30 14:29:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Sep 30 14:29:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Sep 30 14:29:38 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Sep 30 14:29:38 compute-0 systemd[1]: Started libvirt nodedev daemon.
Sep 30 14:29:38 compute-0 sudo[215205]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:29:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:38.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:29:38 compute-0 sshd-session[214990]: Failed password for invalid user admin from 78.128.112.74 port 45098 ssh2
Sep 30 14:29:38 compute-0 sudo[215421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkzlvqbbehypmqljohbctcyyouohkikz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242578.5397897-3143-259865236578671/AnsiballZ_systemd.py'
Sep 30 14:29:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:38 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:38 compute-0 sudo[215421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:38.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:29:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:38.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:29:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:38.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:29:39 compute-0 python3.9[215423]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:29:39 compute-0 systemd[1]: Reloading.
Sep 30 14:29:39 compute-0 systemd-rc-local-generator[215455]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:29:39 compute-0 systemd-sysv-generator[215458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:29:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:39 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:29:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:39 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:29:39 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Sep 30 14:29:39 compute-0 sshd-session[214990]: Connection closed by invalid user admin 78.128.112.74 port 45098 [preauth]
Sep 30 14:29:39 compute-0 ceph-mon[74194]: pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:29:39 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Sep 30 14:29:39 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Sep 30 14:29:39 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Sep 30 14:29:39 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Sep 30 14:29:39 compute-0 systemd[1]: Starting libvirt proxy daemon...
Sep 30 14:29:39 compute-0 systemd[1]: Started libvirt proxy daemon.
Sep 30 14:29:39 compute-0 sudo[215421]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:39 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Sep 30 14:29:39 compute-0 sudo[215636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmvenkvvxuikbvoyjiveowqmkkltdfff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242579.7096334-3143-4896439379988/AnsiballZ_systemd.py'
Sep 30 14:29:39 compute-0 sudo[215636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:39 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:39 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Sep 30 14:29:40 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Sep 30 14:29:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:40.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:29:40 compute-0 python3.9[215639]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:29:40 compute-0 systemd[1]: Reloading.
Sep 30 14:29:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:40 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:40 compute-0 systemd-rc-local-generator[215672]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:29:40 compute-0 systemd-sysv-generator[215676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:29:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:40.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:40 compute-0 sudo[215681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:29:40 compute-0 sudo[215681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:29:40 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Sep 30 14:29:40 compute-0 sudo[215681]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:40 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Sep 30 14:29:40 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 30 14:29:40 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Sep 30 14:29:40 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Sep 30 14:29:40 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Sep 30 14:29:40 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Sep 30 14:29:40 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Sep 30 14:29:40 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Sep 30 14:29:40 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Sep 30 14:29:40 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Sep 30 14:29:40 compute-0 systemd[1]: Started libvirt QEMU daemon.
Sep 30 14:29:40 compute-0 sudo[215636]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:40 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:40 compute-0 setroubleshoot[215462]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 9c4bf31f-cb54-4b92-ac01-99db16341a32
Sep 30 14:29:41 compute-0 setroubleshoot[215462]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Sep 30 14:29:41 compute-0 setroubleshoot[215462]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 9c4bf31f-cb54-4b92-ac01-99db16341a32
Sep 30 14:29:41 compute-0 podman[215855]: 2025-09-30 14:29:41.131067262 +0000 UTC m=+0.055939603 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:29:41 compute-0 sudo[215899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiupiqrifsvpxzgghilfxnbxyjxxlcaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242580.8673508-3143-183754104123593/AnsiballZ_systemd.py'
Sep 30 14:29:41 compute-0 sudo[215899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:41 compute-0 python3.9[215903]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:29:41 compute-0 systemd[1]: Reloading.
Sep 30 14:29:41 compute-0 ceph-mon[74194]: pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:29:41 compute-0 systemd-rc-local-generator[215932]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:29:41 compute-0 systemd-sysv-generator[215935]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:29:41 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Sep 30 14:29:41 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Sep 30 14:29:41 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Sep 30 14:29:41 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Sep 30 14:29:41 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Sep 30 14:29:41 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Sep 30 14:29:41 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 14:29:41 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 14:29:41 compute-0 sudo[215899]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:41 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:42.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:29:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:29:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:29:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:29:42 compute-0 sudo[216113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pajrybljxrxbjkzeurbeleijsilqwgdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242582.4090712-3254-71415453996578/AnsiballZ_file.py'
Sep 30 14:29:42 compute-0 sudo[216113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:42 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c0a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:42 compute-0 python3.9[216115]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:42 compute-0 sudo[216113]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:43 compute-0 sudo[216266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvwyxpkvbbbjgdicazinjnipvcemjgfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242583.1581042-3278-176534040376141/AnsiballZ_find.py'
Sep 30 14:29:43 compute-0 sudo[216266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:43 compute-0 python3.9[216268]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 14:29:43 compute-0 sudo[216266]: pam_unix(sudo:session): session closed for user root
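For reference, a rough shell equivalent of the ansible.builtin.find call above (same path and '*.conf' pattern, files only, non-recursive):
  # find /var/lib/openstack/config/ceph -maxdepth 1 -type f -name '*.conf'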
Sep 30 14:29:43 compute-0 ceph-mon[74194]: pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:29:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:43 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f227c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:44 compute-0 sudo[216419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taheungnppnyfgtnkgtzlkionpwziyng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242583.879201-3302-124214319851450/AnsiballZ_command.py'
Sep 30 14:29:44 compute-0 sudo[216419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:44.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:29:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:44 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:44 compute-0 python3.9[216421]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:29:44 compute-0 sudo[216419]: pam_unix(sudo:session): session closed for user root
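The command module above extracts the cluster fsid from the freshly written ceph.conf; run standalone, with the awk field split and the xargs whitespace trim copied from the logged _raw_params, it is:
  # awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
On this node the result should be 5e3c7776-ac03-5698-b79f-a6dc2d80cae6, the fsid that reappears in the virsh and cephadm calls below.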
Sep 30 14:29:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:44.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:29:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:29:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:44] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Sep 30 14:29:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:44] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Sep 30 14:29:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:29:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:44 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:45 compute-0 python3.9[216575]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 14:29:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:45 compute-0 ceph-mon[74194]: pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:29:45 compute-0 python3.9[216727]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:45 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:46.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:29:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:46 compute-0 python3.9[216850]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242585.5237303-3359-271949597564832/.source.xml follow=False _original_basename=secret.xml.j2 checksum=86427d82caa0e8c5c00972f425868cb7058b73cb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:29:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:46.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:29:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:46 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:47.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:29:47 compute-0 sudo[217000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtgcrqkpidoobugwivcdmxwndgapbsmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242586.77994-3404-64498856721128/AnsiballZ_command.py'
Sep 30 14:29:47 compute-0 sudo[217000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:47 compute-0 python3.9[217002]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:29:47 compute-0 polkitd[8246]: Registered Authentication Agent for unix-process:217004:594966 (system bus name :1.2950 [/usr/bin/pkttyagent --process 217004 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Sep 30 14:29:47 compute-0 polkitd[8246]: Unregistered Authentication Agent for unix-process:217004:594966 (system bus name :1.2950, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Sep 30 14:29:47 compute-0 polkitd[8246]: Registered Authentication Agent for unix-process:217003:594965 (system bus name :1.2951 [/usr/bin/pkttyagent --process 217003 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Sep 30 14:29:47 compute-0 polkitd[8246]: Unregistered Authentication Agent for unix-process:217003:594965 (system bus name :1.2951, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Sep 30 14:29:47 compute-0 sudo[217000]: pam_unix(sudo:session): session closed for user root
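The two virsh calls above refresh the libvirt secret that carries the Ceph client key; as a standalone sequence, assuming /tmp/secret.xml is the file copied in at 14:29:46 and adding a secret-list check not present in the log:
  # virsh secret-undefine 5e3c7776-ac03-5698-b79f-a6dc2d80cae6
  # virsh secret-define --file /tmp/secret.xml
  # virsh secret-list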
Sep 30 14:29:47 compute-0 ceph-mon[74194]: pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:29:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:47 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:48 compute-0 python3.9[217166]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:48.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:29:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:48 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:48 compute-0 sudo[217316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gibullazugfdnekfajzappfalaauiwif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242588.297997-3452-25660104002016/AnsiballZ_command.py'
Sep 30 14:29:48 compute-0 sudo[217316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:48.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:48 compute-0 sudo[217316]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:48 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c310 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:48.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:29:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:48.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:29:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/142949 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:29:49 compute-0 sudo[217469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jseonodbjzyomrdhghdwrcareobkmwcj ; FSID=5e3c7776-ac03-5698-b79f-a6dc2d80cae6 KEY=AQAM5dtoAAAAABAAzvguOWjVdWRDH6OkdLxqDw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242589.0270927-3476-30697740736639/AnsiballZ_command.py'
Sep 30 14:29:49 compute-0 sudo[217469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:49 compute-0 polkitd[8246]: Registered Authentication Agent for unix-process:217473:595189 (system bus name :1.2954 [/usr/bin/pkttyagent --process 217473 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Sep 30 14:29:49 compute-0 polkitd[8246]: Unregistered Authentication Agent for unix-process:217473:595189 (system bus name :1.2954, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Sep 30 14:29:49 compute-0 sudo[217469]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:49 compute-0 ceph-mon[74194]: pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:29:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:49 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:50 compute-0 sudo[217629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xosaggadniefeacxgareabufobvthiyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242589.7858813-3500-236111080214894/AnsiballZ_copy.py'
Sep 30 14:29:50 compute-0 sudo[217629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:50.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:29:50 compute-0 python3.9[217631]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:50 compute-0 sudo[217629]: pam_unix(sudo:session): session closed for user root
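A rough shell equivalent of the remote_src copy above, with the owner, group, and mode taken from the logged parameters:
  # install -o root -g root -m 0644 /var/lib/openstack/config/ceph/ceph.conf /etc/ceph/ceph.conf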
Sep 30 14:29:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:50 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:50.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:50 compute-0 sudo[217781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-figsxxhstwppprpkattwbmrovwjrvhtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242590.478672-3524-154492467950996/AnsiballZ_stat.py'
Sep 30 14:29:50 compute-0 sudo[217781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:50 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:50 compute-0 python3.9[217783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:50 compute-0 sudo[217781]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:51 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Sep 30 14:29:51 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.017s CPU time.
Sep 30 14:29:51 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Sep 30 14:29:51 compute-0 sudo[217904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caqkoucxnbsmxfyzctgvwtcdcnbkexlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242590.478672-3524-154492467950996/AnsiballZ_copy.py'
Sep 30 14:29:51 compute-0 sudo[217904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:51 compute-0 python3.9[217906]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242590.478672-3524-154492467950996/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:51 compute-0 sudo[217904]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:51 compute-0 ceph-mon[74194]: pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:29:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:51 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:52 compute-0 sudo[218058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsupqwxbllrqihfspodrxnitgtgntbox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242591.872303-3572-109310948246650/AnsiballZ_file.py'
Sep 30 14:29:52 compute-0 sudo[218058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:29:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:52.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:29:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:29:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:52 compute-0 python3.9[218060]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:52 compute-0 sudo[218058]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:52.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:52 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:52 compute-0 sudo[218210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uakkzjryxkqsishmggdftgxqmrkeqcao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242592.5644252-3596-101710559793235/AnsiballZ_stat.py'
Sep 30 14:29:52 compute-0 sudo[218210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:53 compute-0 python3.9[218212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:53 compute-0 sudo[218210]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:53 compute-0 sudo[218288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwolacntegsjnyqgcwwrppmhebdfnmdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242592.5644252-3596-101710559793235/AnsiballZ_file.py'
Sep 30 14:29:53 compute-0 sudo[218288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:53 compute-0 python3.9[218291]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:53 compute-0 sudo[218288]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:53 compute-0 ceph-mon[74194]: pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:29:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:53 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:54 compute-0 sudo[218442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsppqwpkiljvfuponwxferzkffcbrmww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242593.7977788-3632-237772612268832/AnsiballZ_stat.py'
Sep 30 14:29:54 compute-0 sudo[218442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:29:54 compute-0 python3.9[218444]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:54 compute-0 sudo[218442]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:54 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:54 compute-0 sudo[218520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iovsbfvsboiulvmdetynjxjtjpphnxar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242593.7977788-3632-237772612268832/AnsiballZ_file.py'
Sep 30 14:29:54 compute-0 sudo[218520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:54.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:54 compute-0 python3.9[218522]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ucpqfx8i recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:54 compute-0 sudo[218520]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:54] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Sep 30 14:29:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:29:54] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Sep 30 14:29:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:54 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:55 compute-0 sudo[218672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iehltpnovjmszzdsoyayxgbqarrvvrfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242594.9933612-3668-84638548934830/AnsiballZ_stat.py'
Sep 30 14:29:55 compute-0 sudo[218672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:55 compute-0 python3.9[218674]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:55 compute-0 sudo[218672]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:29:55 compute-0 sudo[218751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnlvraubdyjyiphtpqtpnxcikidzowwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242594.9933612-3668-84638548934830/AnsiballZ_file.py'
Sep 30 14:29:55 compute-0 sudo[218751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:55 compute-0 ceph-mon[74194]: pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:29:55 compute-0 python3.9[218754]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:55 compute-0 sudo[218751]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:55 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:56.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:29:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:56 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:56 compute-0 sudo[218904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwxbocdsyrousgfaypusavxamlkzwkwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242596.2176535-3707-98210032793163/AnsiballZ_command.py'
Sep 30 14:29:56 compute-0 sudo[218904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:56 compute-0 python3.9[218906]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:29:56 compute-0 sudo[218904]: pam_unix(sudo:session): session closed for user root
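The firewall role reads the current ruleset as JSON (nft -j list ruleset above); one way to inspect that output by hand, assuming jq is installed on the host (jq itself does not appear in this log), is to list the table names it contains:
  # nft -j list ruleset | jq -r '.nftables[] | select(.table) | .table.name'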
Sep 30 14:29:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:56 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:57.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:29:57 compute-0 sudo[218984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:29:57 compute-0 sudo[218984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:29:57 compute-0 sudo[218984]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:57 compute-0 sudo[219009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:29:57 compute-0 sudo[219009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:29:57 compute-0 sudo[219123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwlqwywcqoncnmjstgiashdsocomalje ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759242596.9582064-3731-48968865621619/AnsiballZ_edpm_nftables_from_files.py'
Sep 30 14:29:57 compute-0 sudo[219123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:57 compute-0 python3[219125]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Sep 30 14:29:57 compute-0 sudo[219009]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:57 compute-0 sudo[219123]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:29:57 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:29:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:29:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:29:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 619 B/s rd, 88 B/s wr, 0 op/s
Sep 30 14:29:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:29:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:29:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:29:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:29:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:29:57 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:29:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:29:57 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:29:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:29:57 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:29:57 compute-0 sudo[219188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:29:57 compute-0 sudo[219188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:29:57 compute-0 sudo[219188]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:57 compute-0 sudo[219239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:29:57 compute-0 sudo[219239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:29:57 compute-0 ceph-mon[74194]: pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:29:57 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:29:57 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:29:57 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:29:57 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:29:57 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:29:57 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:29:57 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:29:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:57 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:58 compute-0 sudo[219354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqgctsbhfuzpvyzdmygiwqfqtmyvsvlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242597.8451586-3755-234277663439609/AnsiballZ_stat.py'
Sep 30 14:29:58 compute-0 sudo[219354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:29:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:29:58.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:29:58 compute-0 podman[219386]: 2025-09-30 14:29:58.312342733 +0000 UTC m=+0.054400382 container create 6168d08737a62c82f8a8db0bdac9c1fdfb01009a4223a147012ffe52f6cf4465 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jepsen, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Sep 30 14:29:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:58 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:58 compute-0 systemd[1]: Started libpod-conmon-6168d08737a62c82f8a8db0bdac9c1fdfb01009a4223a147012ffe52f6cf4465.scope.
Sep 30 14:29:58 compute-0 python3.9[219363]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:58 compute-0 podman[219386]: 2025-09-30 14:29:58.284333871 +0000 UTC m=+0.026391600 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:29:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:29:58 compute-0 sudo[219354]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:58 compute-0 podman[219386]: 2025-09-30 14:29:58.410202722 +0000 UTC m=+0.152260421 container init 6168d08737a62c82f8a8db0bdac9c1fdfb01009a4223a147012ffe52f6cf4465 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jepsen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:29:58 compute-0 podman[219386]: 2025-09-30 14:29:58.417133378 +0000 UTC m=+0.159191027 container start 6168d08737a62c82f8a8db0bdac9c1fdfb01009a4223a147012ffe52f6cf4465 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 14:29:58 compute-0 podman[219386]: 2025-09-30 14:29:58.420460047 +0000 UTC m=+0.162517716 container attach 6168d08737a62c82f8a8db0bdac9c1fdfb01009a4223a147012ffe52f6cf4465 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jepsen, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:29:58 compute-0 thirsty_jepsen[219402]: 167 167
Sep 30 14:29:58 compute-0 systemd[1]: libpod-6168d08737a62c82f8a8db0bdac9c1fdfb01009a4223a147012ffe52f6cf4465.scope: Deactivated successfully.
Sep 30 14:29:58 compute-0 podman[219386]: 2025-09-30 14:29:58.423784896 +0000 UTC m=+0.165842535 container died 6168d08737a62c82f8a8db0bdac9c1fdfb01009a4223a147012ffe52f6cf4465 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d86a2901a17005cac46fdccfda035b858ff61f48c7f9349b48e0be9a989abafc-merged.mount: Deactivated successfully.
Sep 30 14:29:58 compute-0 podman[219386]: 2025-09-30 14:29:58.462830795 +0000 UTC m=+0.204888434 container remove 6168d08737a62c82f8a8db0bdac9c1fdfb01009a4223a147012ffe52f6cf4465 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jepsen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:29:58 compute-0 systemd[1]: libpod-conmon-6168d08737a62c82f8a8db0bdac9c1fdfb01009a4223a147012ffe52f6cf4465.scope: Deactivated successfully.
Sep 30 14:29:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:29:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:29:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:29:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:29:58 compute-0 sudo[219511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jljdrreasqebnpjslimcdhqcbxrqmcjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242597.8451586-3755-234277663439609/AnsiballZ_file.py'
Sep 30 14:29:58 compute-0 sudo[219511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:58 compute-0 podman[219474]: 2025-09-30 14:29:58.61455851 +0000 UTC m=+0.042126512 container create 2968b213fb94fb29fa6f063854d96a2f5f7a98220dbbc3e64ff3b3ea2174c2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 14:29:58 compute-0 systemd[1]: Started libpod-conmon-2968b213fb94fb29fa6f063854d96a2f5f7a98220dbbc3e64ff3b3ea2174c2de.scope.
Sep 30 14:29:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7fe03a800263272c23e087129faf40b9f14966aae5187c3759bf3d2f7e7841/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7fe03a800263272c23e087129faf40b9f14966aae5187c3759bf3d2f7e7841/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7fe03a800263272c23e087129faf40b9f14966aae5187c3759bf3d2f7e7841/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7fe03a800263272c23e087129faf40b9f14966aae5187c3759bf3d2f7e7841/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:29:58 compute-0 podman[219474]: 2025-09-30 14:29:58.597188264 +0000 UTC m=+0.024756286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7fe03a800263272c23e087129faf40b9f14966aae5187c3759bf3d2f7e7841/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:29:58 compute-0 podman[219474]: 2025-09-30 14:29:58.708677338 +0000 UTC m=+0.136245400 container init 2968b213fb94fb29fa6f063854d96a2f5f7a98220dbbc3e64ff3b3ea2174c2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:29:58 compute-0 podman[219474]: 2025-09-30 14:29:58.717193746 +0000 UTC m=+0.144761748 container start 2968b213fb94fb29fa6f063854d96a2f5f7a98220dbbc3e64ff3b3ea2174c2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 14:29:58 compute-0 podman[219474]: 2025-09-30 14:29:58.720469034 +0000 UTC m=+0.148037036 container attach 2968b213fb94fb29fa6f063854d96a2f5f7a98220dbbc3e64ff3b3ea2174c2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 14:29:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:29:58 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:29:58 compute-0 python3.9[219516]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:29:58 compute-0 sudo[219511]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:58.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:29:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:29:58.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:29:58 compute-0 ceph-mon[74194]: pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 619 B/s rd, 88 B/s wr, 0 op/s
Sep 30 14:29:59 compute-0 epic_shaw[219519]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:29:59 compute-0 epic_shaw[219519]: --> All data devices are unavailable
Sep 30 14:29:59 compute-0 systemd[1]: libpod-2968b213fb94fb29fa6f063854d96a2f5f7a98220dbbc3e64ff3b3ea2174c2de.scope: Deactivated successfully.
Sep 30 14:29:59 compute-0 podman[219474]: 2025-09-30 14:29:59.048762071 +0000 UTC m=+0.476330083 container died 2968b213fb94fb29fa6f063854d96a2f5f7a98220dbbc3e64ff3b3ea2174c2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-de7fe03a800263272c23e087129faf40b9f14966aae5187c3759bf3d2f7e7841-merged.mount: Deactivated successfully.
Sep 30 14:29:59 compute-0 podman[219474]: 2025-09-30 14:29:59.100739197 +0000 UTC m=+0.528307199 container remove 2968b213fb94fb29fa6f063854d96a2f5f7a98220dbbc3e64ff3b3ea2174c2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:29:59 compute-0 systemd[1]: libpod-conmon-2968b213fb94fb29fa6f063854d96a2f5f7a98220dbbc3e64ff3b3ea2174c2de.scope: Deactivated successfully.
Sep 30 14:29:59 compute-0 sudo[219239]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:59 compute-0 sudo[219620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:29:59 compute-0 sudo[219620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:29:59 compute-0 sudo[219620]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:59 compute-0 sudo[219664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:29:59 compute-0 sudo[219664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:29:59 compute-0 sudo[219744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moaegdfazwptxlcucswzcnejnmwxican ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242599.0572968-3791-196141251378583/AnsiballZ_stat.py'
Sep 30 14:29:59 compute-0 sudo[219744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:29:59
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.nfs', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr']
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:29:59 compute-0 python3.9[219746]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:29:59 compute-0 sudo[219744]: pam_unix(sudo:session): session closed for user root
Sep 30 14:29:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:29:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:29:59 compute-0 podman[219811]: 2025-09-30 14:29:59.751345711 +0000 UTC m=+0.068038538 container create 93f03f2dc5cb676477734e2a25aecd14431159ff690c75c63a0302a62dd3ebf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_northcutt, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:29:59 compute-0 systemd[1]: Started libpod-conmon-93f03f2dc5cb676477734e2a25aecd14431159ff690c75c63a0302a62dd3ebf2.scope.
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:29:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:29:59 compute-0 podman[219811]: 2025-09-30 14:29:59.722597959 +0000 UTC m=+0.039290876 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:29:59 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:29:59 compute-0 podman[219811]: 2025-09-30 14:29:59.842662403 +0000 UTC m=+0.159355250 container init 93f03f2dc5cb676477734e2a25aecd14431159ff690c75c63a0302a62dd3ebf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:29:59 compute-0 sudo[219883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igcumcynmazvzdnqiedihhkjtqpkriah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242599.0572968-3791-196141251378583/AnsiballZ_file.py'
Sep 30 14:29:59 compute-0 sudo[219883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:29:59 compute-0 podman[219811]: 2025-09-30 14:29:59.852280722 +0000 UTC m=+0.168973549 container start 93f03f2dc5cb676477734e2a25aecd14431159ff690c75c63a0302a62dd3ebf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_northcutt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 14:29:59 compute-0 podman[219811]: 2025-09-30 14:29:59.855963481 +0000 UTC m=+0.172656308 container attach 93f03f2dc5cb676477734e2a25aecd14431159ff690c75c63a0302a62dd3ebf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:29:59 compute-0 systemd[1]: libpod-93f03f2dc5cb676477734e2a25aecd14431159ff690c75c63a0302a62dd3ebf2.scope: Deactivated successfully.
Sep 30 14:29:59 compute-0 optimistic_northcutt[219854]: 167 167
Sep 30 14:29:59 compute-0 conmon[219854]: conmon 93f03f2dc5cb67647773 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93f03f2dc5cb676477734e2a25aecd14431159ff690c75c63a0302a62dd3ebf2.scope/container/memory.events
Sep 30 14:29:59 compute-0 podman[219811]: 2025-09-30 14:29:59.859398293 +0000 UTC m=+0.176091140 container died 93f03f2dc5cb676477734e2a25aecd14431159ff690c75c63a0302a62dd3ebf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_northcutt, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6235955f618b35e6716f1b981cead557f0824393ce1853e88094ce701aa540f8-merged.mount: Deactivated successfully.
Sep 30 14:29:59 compute-0 podman[219811]: 2025-09-30 14:29:59.89988543 +0000 UTC m=+0.216578257 container remove 93f03f2dc5cb676477734e2a25aecd14431159ff690c75c63a0302a62dd3ebf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_northcutt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:29:59 compute-0 systemd[1]: libpod-conmon-93f03f2dc5cb676477734e2a25aecd14431159ff690c75c63a0302a62dd3ebf2.scope: Deactivated successfully.
Sep 30 14:29:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:30:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Sep 30 14:30:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:00 compute-0 python3.9[219886]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:00 compute-0 podman[219906]: 2025-09-30 14:30:00.074377677 +0000 UTC m=+0.047498717 container create 185a1c3ebf74be7da0be7538d36f97d30829f179d24f14897e0d2b22ff68cc54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:30:00 compute-0 sudo[219883]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:00 compute-0 systemd[1]: Started libpod-conmon-185a1c3ebf74be7da0be7538d36f97d30829f179d24f14897e0d2b22ff68cc54.scope.
Sep 30 14:30:00 compute-0 podman[219906]: 2025-09-30 14:30:00.051983585 +0000 UTC m=+0.025104675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:30:00 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71fa04446eb9ee6a3ea0638ae218ac9919d2c06d06842ef09d6eb95f22933/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71fa04446eb9ee6a3ea0638ae218ac9919d2c06d06842ef09d6eb95f22933/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71fa04446eb9ee6a3ea0638ae218ac9919d2c06d06842ef09d6eb95f22933/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71fa04446eb9ee6a3ea0638ae218ac9919d2c06d06842ef09d6eb95f22933/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:00 compute-0 podman[219906]: 2025-09-30 14:30:00.179686225 +0000 UTC m=+0.152807285 container init 185a1c3ebf74be7da0be7538d36f97d30829f179d24f14897e0d2b22ff68cc54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:30:00 compute-0 podman[219906]: 2025-09-30 14:30:00.18769951 +0000 UTC m=+0.160820550 container start 185a1c3ebf74be7da0be7538d36f97d30829f179d24f14897e0d2b22ff68cc54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:30:00 compute-0 podman[219906]: 2025-09-30 14:30:00.192275263 +0000 UTC m=+0.165396313 container attach 185a1c3ebf74be7da0be7538d36f97d30829f179d24f14897e0d2b22ff68cc54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:30:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:00.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]: {
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:     "0": [
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:         {
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "devices": [
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "/dev/loop3"
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             ],
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "lv_name": "ceph_lv0",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "lv_size": "21470642176",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "name": "ceph_lv0",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "tags": {
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.cluster_name": "ceph",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.crush_device_class": "",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.encrypted": "0",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.osd_id": "0",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.type": "block",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.vdo": "0",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:                 "ceph.with_tpm": "0"
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             },
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "type": "block",
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:             "vg_name": "ceph_vg0"
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:         }
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]:     ]
Sep 30 14:30:00 compute-0 sleepy_lamport[219925]: }
Sep 30 14:30:00 compute-0 systemd[1]: libpod-185a1c3ebf74be7da0be7538d36f97d30829f179d24f14897e0d2b22ff68cc54.scope: Deactivated successfully.
Sep 30 14:30:00 compute-0 podman[219906]: 2025-09-30 14:30:00.508237838 +0000 UTC m=+0.481358888 container died 185a1c3ebf74be7da0be7538d36f97d30829f179d24f14897e0d2b22ff68cc54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamport, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-49a71fa04446eb9ee6a3ea0638ae218ac9919d2c06d06842ef09d6eb95f22933-merged.mount: Deactivated successfully.
Sep 30 14:30:00 compute-0 podman[219906]: 2025-09-30 14:30:00.548754726 +0000 UTC m=+0.521875766 container remove 185a1c3ebf74be7da0be7538d36f97d30829f179d24f14897e0d2b22ff68cc54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_lamport, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Sep 30 14:30:00 compute-0 systemd[1]: libpod-conmon-185a1c3ebf74be7da0be7538d36f97d30829f179d24f14897e0d2b22ff68cc54.scope: Deactivated successfully.
Sep 30 14:30:00 compute-0 sudo[220095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcgsrxqzbdqjgycurdxgcfrialqareqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242600.3019974-3827-268659447410475/AnsiballZ_stat.py'
Sep 30 14:30:00 compute-0 sudo[220095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:00 compute-0 sudo[219664]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:00 compute-0 sudo[220098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:30:00 compute-0 sudo[220098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:30:00 compute-0 sudo[220098]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:00 compute-0 sudo[220123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:30:00 compute-0 sudo[220123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:30:00 compute-0 python3.9[220097]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:00 compute-0 sudo[220148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:30:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:30:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:30:00 compute-0 sudo[220148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:30:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:30:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:30:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:30:00 compute-0 sudo[220148]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:00 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:00 compute-0 sudo[220095]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:30:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:30:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:30:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:30:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:30:00 compute-0 ceph-mon[74194]: pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:30:00 compute-0 ceph-mon[74194]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Sep 30 14:30:01 compute-0 sudo[220288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbqbgabknrphtxhwfjkpeffzoeyfrezu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242600.3019974-3827-268659447410475/AnsiballZ_file.py'
Sep 30 14:30:01 compute-0 sudo[220288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:01 compute-0 podman[220291]: 2025-09-30 14:30:01.11636475 +0000 UTC m=+0.039590834 container create f81d6c47131ce38e7bfbd2b2b5363645899d1f51ad689490aeb423690d195662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:30:01 compute-0 systemd[1]: Started libpod-conmon-f81d6c47131ce38e7bfbd2b2b5363645899d1f51ad689490aeb423690d195662.scope.
Sep 30 14:30:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:30:01 compute-0 podman[220291]: 2025-09-30 14:30:01.185739844 +0000 UTC m=+0.108965948 container init f81d6c47131ce38e7bfbd2b2b5363645899d1f51ad689490aeb423690d195662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:30:01 compute-0 podman[220291]: 2025-09-30 14:30:01.193720738 +0000 UTC m=+0.116946822 container start f81d6c47131ce38e7bfbd2b2b5363645899d1f51ad689490aeb423690d195662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:30:01 compute-0 podman[220291]: 2025-09-30 14:30:01.099169309 +0000 UTC m=+0.022395423 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:30:01 compute-0 podman[220291]: 2025-09-30 14:30:01.197215902 +0000 UTC m=+0.120441996 container attach f81d6c47131ce38e7bfbd2b2b5363645899d1f51ad689490aeb423690d195662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_grothendieck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:30:01 compute-0 practical_grothendieck[220307]: 167 167
Sep 30 14:30:01 compute-0 podman[220291]: 2025-09-30 14:30:01.199237286 +0000 UTC m=+0.122463370 container died f81d6c47131ce38e7bfbd2b2b5363645899d1f51ad689490aeb423690d195662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:30:01 compute-0 systemd[1]: libpod-f81d6c47131ce38e7bfbd2b2b5363645899d1f51ad689490aeb423690d195662.scope: Deactivated successfully.
Sep 30 14:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-27b418dbc5743151d9737e5f0a838fcafd7a82ac37b51ea97111b974a9175587-merged.mount: Deactivated successfully.
Sep 30 14:30:01 compute-0 podman[220291]: 2025-09-30 14:30:01.237222906 +0000 UTC m=+0.160448980 container remove f81d6c47131ce38e7bfbd2b2b5363645899d1f51ad689490aeb423690d195662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_grothendieck, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:30:01 compute-0 systemd[1]: libpod-conmon-f81d6c47131ce38e7bfbd2b2b5363645899d1f51ad689490aeb423690d195662.scope: Deactivated successfully.
Sep 30 14:30:01 compute-0 python3.9[220290]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:01 compute-0 sudo[220288]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:01 compute-0 podman[220356]: 2025-09-30 14:30:01.398292632 +0000 UTC m=+0.041255849 container create ca5d1e73914e86797f2a95e339f95dfeb17c06ae172cea28d417205c4d296ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_vaughan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:30:01 compute-0 systemd[1]: Started libpod-conmon-ca5d1e73914e86797f2a95e339f95dfeb17c06ae172cea28d417205c4d296ac3.scope.
Sep 30 14:30:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8578209adf58585bad3ed449b90217c81f94345b56f2d07554e0a2f28aeb6eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8578209adf58585bad3ed449b90217c81f94345b56f2d07554e0a2f28aeb6eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8578209adf58585bad3ed449b90217c81f94345b56f2d07554e0a2f28aeb6eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8578209adf58585bad3ed449b90217c81f94345b56f2d07554e0a2f28aeb6eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:01 compute-0 podman[220356]: 2025-09-30 14:30:01.465543838 +0000 UTC m=+0.108507065 container init ca5d1e73914e86797f2a95e339f95dfeb17c06ae172cea28d417205c4d296ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 14:30:01 compute-0 podman[220356]: 2025-09-30 14:30:01.472755722 +0000 UTC m=+0.115718939 container start ca5d1e73914e86797f2a95e339f95dfeb17c06ae172cea28d417205c4d296ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:30:01 compute-0 podman[220356]: 2025-09-30 14:30:01.380495864 +0000 UTC m=+0.023459111 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:30:01 compute-0 podman[220356]: 2025-09-30 14:30:01.476450801 +0000 UTC m=+0.119414038 container attach ca5d1e73914e86797f2a95e339f95dfeb17c06ae172cea28d417205c4d296ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:30:01 compute-0 sudo[220518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwjltedposxlbrkfjrxlcydacwjztokc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242601.494432-3863-36480361684176/AnsiballZ_stat.py'
Sep 30 14:30:01 compute-0 sudo[220518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:30:01 compute-0 python3.9[220524]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:02 compute-0 sudo[220518]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:02 compute-0 lvm[220623]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:30:02 compute-0 lvm[220623]: VG ceph_vg0 finished
Sep 30 14:30:02 compute-0 zen_vaughan[220373]: {}
Sep 30 14:30:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:02.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:02 compute-0 sudo[220656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iysoznoriczujuydfnngtolvjkzdgmst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242601.494432-3863-36480361684176/AnsiballZ_file.py'
Sep 30 14:30:02 compute-0 sudo[220656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:02 compute-0 systemd[1]: libpod-ca5d1e73914e86797f2a95e339f95dfeb17c06ae172cea28d417205c4d296ac3.scope: Deactivated successfully.
Sep 30 14:30:02 compute-0 systemd[1]: libpod-ca5d1e73914e86797f2a95e339f95dfeb17c06ae172cea28d417205c4d296ac3.scope: Consumed 1.134s CPU time.
Sep 30 14:30:02 compute-0 podman[220356]: 2025-09-30 14:30:02.232899827 +0000 UTC m=+0.875863044 container died ca5d1e73914e86797f2a95e339f95dfeb17c06ae172cea28d417205c4d296ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:30:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8578209adf58585bad3ed449b90217c81f94345b56f2d07554e0a2f28aeb6eb-merged.mount: Deactivated successfully.
Sep 30 14:30:02 compute-0 podman[220356]: 2025-09-30 14:30:02.275332637 +0000 UTC m=+0.918295854 container remove ca5d1e73914e86797f2a95e339f95dfeb17c06ae172cea28d417205c4d296ac3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:30:02 compute-0 systemd[1]: libpod-conmon-ca5d1e73914e86797f2a95e339f95dfeb17c06ae172cea28d417205c4d296ac3.scope: Deactivated successfully.
Sep 30 14:30:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c3b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:02 compute-0 sudo[220123]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:30:02 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:30:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:30:02 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:30:02 compute-0 sudo[220671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:30:02 compute-0 sudo[220671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:30:02 compute-0 sudo[220671]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:02 compute-0 python3.9[220658]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:02 compute-0 sudo[220656]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:02.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:02 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:03 compute-0 ceph-mon[74194]: pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:30:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:30:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:30:03 compute-0 sudo[220845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiqaslhiufdyguxvfoklijkjgteguotr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242602.6895795-3899-126864236333650/AnsiballZ_stat.py'
Sep 30 14:30:03 compute-0 sudo[220845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:03 compute-0 python3.9[220847]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:03 compute-0 sudo[220845]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:03 compute-0 sudo[220971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cefmkczdeaypyvwbafggplvwgsrxbsqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242602.6895795-3899-126864236333650/AnsiballZ_copy.py'
Sep 30 14:30:03 compute-0 sudo[220971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:30:03 compute-0 python3.9[220973]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759242602.6895795-3899-126864236333650/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:03 compute-0 sudo[220971]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:04 compute-0 ceph-mon[74194]: pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:30:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:04.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:04 compute-0 sudo[221124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adxpouozlfitgiutczebordrbqqtwzcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242604.056767-3944-135605482580125/AnsiballZ_file.py'
Sep 30 14:30:04 compute-0 sudo[221124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:04 compute-0 python3.9[221126]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:04 compute-0 sudo[221124]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:04.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:04] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:30:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:04] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:30:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:04 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c3d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:05 compute-0 sudo[221285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aungjfxxjswwojgfabksmqhibrpalsbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242604.7842035-3968-40135541983171/AnsiballZ_command.py'
Sep 30 14:30:05 compute-0 sudo[221285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:05 compute-0 podman[221250]: 2025-09-30 14:30:05.121343391 +0000 UTC m=+0.102194006 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Sep 30 14:30:05 compute-0 python3.9[221291]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:30:05 compute-0 sudo[221285]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:30:05 compute-0 sudo[221460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njxqgbkjjhoolzhdxinrzbgfnxcfbqjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242605.486963-3992-264955113407376/AnsiballZ_blockinfile.py'
Sep 30 14:30:05 compute-0 sudo[221460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:06 compute-0 python3.9[221462]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:06 compute-0 sudo[221460]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:06.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:06.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:06 compute-0 sudo[221612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssbgucditkxkszaghoyccrcnfwmzobai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242606.4459276-4019-149126781954411/AnsiballZ_command.py'
Sep 30 14:30:06 compute-0 sudo[221612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:06 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:06 compute-0 ceph-mon[74194]: pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 265 B/s rd, 0 op/s
Sep 30 14:30:06 compute-0 python3.9[221614]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:30:06 compute-0 sudo[221612]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:07.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:30:07 compute-0 sudo[221766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjstupzswqatoeauwenevvvadzheedfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242607.0946934-4043-277175487575816/AnsiballZ_stat.py'
Sep 30 14:30:07 compute-0 sudo[221766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:07 compute-0 python3.9[221768]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:30:07 compute-0 sudo[221766]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 531 B/s rd, 0 op/s
Sep 30 14:30:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:08 compute-0 sudo[221921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyoqkbxspgskzlvtmootucwekemtmawo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242607.7706013-4067-161628514211225/AnsiballZ_command.py'
Sep 30 14:30:08 compute-0 sudo[221921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:08 compute-0 python3.9[221923]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:30:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:08.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:08 compute-0 sudo[221921]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22940026b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:08.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:08 compute-0 sudo[222076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpdbwhiesfwawquqpdhuewwufvlggupp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242608.4465268-4091-94790339029242/AnsiballZ_file.py'
Sep 30 14:30:08 compute-0 sudo[222076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:08 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:08 compute-0 ceph-mon[74194]: pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 531 B/s rd, 0 op/s
Sep 30 14:30:08 compute-0 python3.9[222078]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:08 compute-0 sudo[222076]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:08.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:30:09 compute-0 sudo[222229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyagwgdhhyewmkxzdwbdbenllglxuipy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242609.1639137-4115-171814652060164/AnsiballZ_stat.py'
Sep 30 14:30:09 compute-0 sudo[222229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:09 compute-0 python3.9[222231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:09 compute-0 sudo[222229]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22ac0027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:10 compute-0 sudo[222353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noizqqfjsznimbqnqyxxvehulzcmydbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242609.1639137-4115-171814652060164/AnsiballZ_copy.py'
Sep 30 14:30:10 compute-0 sudo[222353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:10 compute-0 python3.9[222355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242609.1639137-4115-171814652060164/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:10.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:10 compute-0 sudo[222353]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:10.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:10 compute-0 sudo[222505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddfvtejzgyhkifspwsebfbwtlkkddkyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242610.5171125-4160-127377663091664/AnsiballZ_stat.py'
Sep 30 14:30:10 compute-0 sudo[222505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:10 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:10 compute-0 ceph-mon[74194]: pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:11 compute-0 python3.9[222507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:11 compute-0 sudo[222505]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:11 compute-0 sudo[222644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzwdncvwcxzlgnyhtysddmksmasonnyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242610.5171125-4160-127377663091664/AnsiballZ_copy.py'
Sep 30 14:30:11 compute-0 sudo[222644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:11 compute-0 podman[222602]: 2025-09-30 14:30:11.359965659 +0000 UTC m=+0.043712975 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:30:11 compute-0 python3.9[222652]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242610.5171125-4160-127377663091664/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:11 compute-0 sudo[222644]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:12 compute-0 sudo[222803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oahiwwqncgvtqwskkglwbvtazgzbdovo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242611.757859-4205-44759626457065/AnsiballZ_stat.py'
Sep 30 14:30:12 compute-0 sudo[222803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:12 compute-0 python3.9[222805]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:12.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:12 compute-0 sudo[222803]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:12 compute-0 sudo[222926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhzqrugfzioqknjgyhgzfvdwfvqecipw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242611.757859-4205-44759626457065/AnsiballZ_copy.py'
Sep 30 14:30:12 compute-0 sudo[222926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:12.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:12 compute-0 python3.9[222928]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242611.757859-4205-44759626457065/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:12 compute-0 sudo[222926]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:12 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f22a000c430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:12 compute-0 ceph-mon[74194]: pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:13 compute-0 sudo[223080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrcemqhgdqbuuldxrdqlhbfeirryjsjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242613.0201504-4250-108185244273196/AnsiballZ_systemd.py'
Sep 30 14:30:13 compute-0 sudo[223080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:13 compute-0 python3.9[223082]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:30:13 compute-0 systemd[1]: Reloading.
Sep 30 14:30:13 compute-0 unix_chkpwd[223085]: password check failed for user (root)
Sep 30 14:30:13 compute-0 sshd-session[222953]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Sep 30 14:30:13 compute-0 systemd-rc-local-generator[223112]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:30:13 compute-0 systemd-sysv-generator[223115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:30:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:14 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2294004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:14 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Sep 30 14:30:14 compute-0 sudo[223080]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:14.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[162268]: 30/09/2025 14:30:14 : epoch 68dbe850 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2280003040 fd 48 proxy ignored for local
Sep 30 14:30:14 compute-0 kernel: ganesha.nfsd[213301]: segfault at 50 ip 00007f235f23732e sp 00007f2317ffe210 error 4 in libntirpc.so.5.8[7f235f21c000+2c000] likely on CPU 6 (core 0, socket 6)
Sep 30 14:30:14 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 14:30:14 compute-0 systemd[1]: Started Process Core Dump (PID 223222/UID 0).
Sep 30 14:30:14 compute-0 sudo[223275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwckdzdtxlzbvcguszpsymkhvsrwwips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242614.219237-4274-6372527967846/AnsiballZ_systemd.py'
Sep 30 14:30:14 compute-0 sudo[223275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:14.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:30:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:30:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:14] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:30:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:14] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:30:14 compute-0 python3.9[223277]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Sep 30 14:30:14 compute-0 systemd[1]: Reloading.
Sep 30 14:30:14 compute-0 systemd-rc-local-generator[223305]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:30:14 compute-0 ceph-mon[74194]: pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:30:14 compute-0 systemd-sysv-generator[223308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:30:15 compute-0 systemd[1]: Reloading.
Sep 30 14:30:15 compute-0 systemd-rc-local-generator[223341]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:30:15 compute-0 systemd-sysv-generator[223344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:30:15 compute-0 systemd-coredump[223224]: Process 162272 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 69:
                                                    #0  0x00007f235f23732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 14:30:15 compute-0 sudo[223275]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:15 compute-0 systemd[1]: systemd-coredump@5-223222-0.service: Deactivated successfully.
Sep 30 14:30:15 compute-0 systemd[1]: systemd-coredump@5-223222-0.service: Consumed 1.127s CPU time.
Sep 30 14:30:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:15 compute-0 podman[223364]: 2025-09-30 14:30:15.645069813 +0000 UTC m=+0.041593368 container died 80179a74fce2d068837386b4b41b6e1dd6e60344a4e95807b646c30c4597f9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 14:30:15 compute-0 sshd-session[222953]: Failed password for root from 80.94.93.119 port 24692 ssh2
Sep 30 14:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-daead3677801043f1df4ec6991bed2a91f11985fb06e2646406fe300e2adbedb-merged.mount: Deactivated successfully.
Sep 30 14:30:15 compute-0 podman[223364]: 2025-09-30 14:30:15.682393805 +0000 UTC m=+0.078917380 container remove 80179a74fce2d068837386b4b41b6e1dd6e60344a4e95807b646c30c4597f9c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:30:15 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Main process exited, code=exited, status=139/n/a
Sep 30 14:30:15 compute-0 unix_chkpwd[223417]: password check failed for user (root)
Sep 30 14:30:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:15 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Failed with result 'exit-code'.
Sep 30 14:30:15 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.897s CPU time.
Sep 30 14:30:15 compute-0 sshd-session[164164]: Connection closed by 192.168.122.30 port 52104
Sep 30 14:30:15 compute-0 sshd-session[164160]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:30:15 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Sep 30 14:30:15 compute-0 systemd[1]: session-53.scope: Consumed 3min 23.068s CPU time.
Sep 30 14:30:15 compute-0 systemd-logind[808]: Session 53 logged out. Waiting for processes to exit.
Sep 30 14:30:15 compute-0 systemd-logind[808]: Removed session 53.
Sep 30 14:30:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:16.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:16.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:16 compute-0 ceph-mon[74194]: pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:17.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:30:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:30:18 compute-0 sshd-session[222953]: Failed password for root from 80.94.93.119 port 24692 ssh2
Sep 30 14:30:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:18.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:18.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:18.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:30:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:18.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:30:18 compute-0 ceph-mon[74194]: pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:30:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:19 compute-0 unix_chkpwd[223430]: password check failed for user (root)
Sep 30 14:30:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:20.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143020 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:30:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:20.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:20 compute-0 sudo[223431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:30:20 compute-0 sudo[223431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:30:20 compute-0 sudo[223431]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:20 compute-0 ceph-mon[74194]: pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:21 compute-0 sshd-session[223457]: Accepted publickey for zuul from 192.168.122.30 port 53268 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:30:21 compute-0 systemd-logind[808]: New session 54 of user zuul.
Sep 30 14:30:21 compute-0 systemd[1]: Started Session 54 of User zuul.
Sep 30 14:30:21 compute-0 sshd-session[223457]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:30:21 compute-0 sshd-session[222953]: Failed password for root from 80.94.93.119 port 24692 ssh2
Sep 30 14:30:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:21 compute-0 sshd-session[222953]: Received disconnect from 80.94.93.119 port 24692:11:  [preauth]
Sep 30 14:30:21 compute-0 sshd-session[222953]: Disconnected from authenticating user root 80.94.93.119 port 24692 [preauth]
Sep 30 14:30:21 compute-0 sshd-session[222953]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Sep 30 14:30:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143022 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:30:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000054s ======
Sep 30 14:30:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:22.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Sep 30 14:30:22 compute-0 python3.9[223613]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:30:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:22.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:22 compute-0 unix_chkpwd[223618]: password check failed for user (root)
Sep 30 14:30:22 compute-0 sshd-session[223538]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Sep 30 14:30:22 compute-0 ceph-mon[74194]: pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:23 compute-0 sudo[223769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqtpzncobbygvpvsxyufcfilhvmbqdrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242623.1811638-62-173618356579162/AnsiballZ_file.py'
Sep 30 14:30:23 compute-0 sudo[223769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:23 compute-0 python3.9[223771]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:30:23 compute-0 sudo[223769]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:24 compute-0 ceph-mon[74194]: pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:24.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:24 compute-0 sudo[223922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdogpzqsmvkmjindmfvizendeqxftokg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242624.0389955-62-241400344613172/AnsiballZ_file.py'
Sep 30 14:30:24 compute-0 sudo[223922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:24 compute-0 python3.9[223924]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:30:24 compute-0 sudo[223922]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:24.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:24] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:30:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:24] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:30:24 compute-0 sudo[224074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybehjbedkirlbzimahycaeuvfknhptps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242624.6392617-62-171969742312189/AnsiballZ_file.py'
Sep 30 14:30:24 compute-0 sudo[224074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:24 compute-0 sshd-session[223538]: Failed password for root from 80.94.93.119 port 64956 ssh2
Sep 30 14:30:25 compute-0 python3.9[224076]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:30:25 compute-0 sudo[224074]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:25 compute-0 sudo[224227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfeiaggorcaraxzqfknmjaolveiipmaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242625.3077106-62-164200355138756/AnsiballZ_file.py'
Sep 30 14:30:25 compute-0 sudo[224227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.649689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242625649715, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 3962, "num_deletes": 502, "total_data_size": 8107175, "memory_usage": 8250456, "flush_reason": "Manual Compaction"}
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242625681215, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 4523475, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13339, "largest_seqno": 17300, "table_properties": {"data_size": 4511881, "index_size": 6552, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3973, "raw_key_size": 31180, "raw_average_key_size": 19, "raw_value_size": 4484407, "raw_average_value_size": 2861, "num_data_blocks": 285, "num_entries": 1567, "num_filter_entries": 1567, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759242207, "oldest_key_time": 1759242207, "file_creation_time": 1759242625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 31837 microseconds, and 7764 cpu microseconds.
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.681522) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 4523475 bytes OK
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.681628) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.684705) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.684747) EVENT_LOG_v1 {"time_micros": 1759242625684737, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.684771) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8091227, prev total WAL file size 8091227, number of live WAL files 2.
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.687960) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(4417KB)], [32(12MB)]
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242625688019, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 18027528, "oldest_snapshot_seqno": -1}
Sep 30 14:30:25 compute-0 python3.9[224229]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5048 keys, 13523017 bytes, temperature: kUnknown
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242625766425, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 13523017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13487348, "index_size": 21927, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 126173, "raw_average_key_size": 24, "raw_value_size": 13393768, "raw_average_value_size": 2653, "num_data_blocks": 917, "num_entries": 5048, "num_filter_entries": 5048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759242625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.766731) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 13523017 bytes
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.768301) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.6 rd, 172.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.3, 12.9 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(7.0) write-amplify(3.0) OK, records in: 5868, records dropped: 820 output_compression: NoCompression
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.768325) EVENT_LOG_v1 {"time_micros": 1759242625768312, "job": 14, "event": "compaction_finished", "compaction_time_micros": 78526, "compaction_time_cpu_micros": 29298, "output_level": 6, "num_output_files": 1, "total_output_size": 13523017, "num_input_records": 5868, "num_output_records": 5048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242625769518, "job": 14, "event": "table_file_deletion", "file_number": 34}
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242625772523, "job": 14, "event": "table_file_deletion", "file_number": 32}
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.687712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.772672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.772677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.772679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.772680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:25 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:25.772682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:25 compute-0 sudo[224227]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:26 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Scheduled restart job, restart counter is at 6.
Sep 30 14:30:26 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:30:26 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.897s CPU time.
Sep 30 14:30:26 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:30:26 compute-0 sudo[224404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmamdtvjrfontmzdawdnihtgbfbjmeib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242625.9120517-62-180259009312188/AnsiballZ_file.py'
Sep 30 14:30:26 compute-0 sudo[224404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:26.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:26 compute-0 podman[224428]: 2025-09-30 14:30:26.281311627 +0000 UTC m=+0.046775957 container create a9f632908bc14e1c8c508281bd22a30acb3acffb173c593a50e9fa74e66cefeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfbb0d4da650e36839dccb1ea26065108c4a4cfbb681aac74dd645b5c0a4c63/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfbb0d4da650e36839dccb1ea26065108c4a4cfbb681aac74dd645b5c0a4c63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfbb0d4da650e36839dccb1ea26065108c4a4cfbb681aac74dd645b5c0a4c63/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfbb0d4da650e36839dccb1ea26065108c4a4cfbb681aac74dd645b5c0a4c63/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:30:26 compute-0 podman[224428]: 2025-09-30 14:30:26.350000242 +0000 UTC m=+0.115464592 container init a9f632908bc14e1c8c508281bd22a30acb3acffb173c593a50e9fa74e66cefeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:30:26 compute-0 podman[224428]: 2025-09-30 14:30:26.261601728 +0000 UTC m=+0.027066078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:30:26 compute-0 podman[224428]: 2025-09-30 14:30:26.357423512 +0000 UTC m=+0.122887842 container start a9f632908bc14e1c8c508281bd22a30acb3acffb173c593a50e9fa74e66cefeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:30:26 compute-0 bash[224428]: a9f632908bc14e1c8c508281bd22a30acb3acffb173c593a50e9fa74e66cefeb
Sep 30 14:30:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:30:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:30:26 compute-0 python3.9[224412]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:30:26 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:30:26 compute-0 sudo[224404]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:30:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:30:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:30:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:30:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:30:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:30:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:26.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:26 compute-0 ceph-mon[74194]: pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:26 compute-0 unix_chkpwd[224562]: password check failed for user (root)
Sep 30 14:30:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:27.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:30:27 compute-0 sudo[224636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnlsezdruqizdbdgrssycmrzopwbsnmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242626.663734-170-247348387958140/AnsiballZ_stat.py'
Sep 30 14:30:27 compute-0 sudo[224636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:27 compute-0 python3.9[224638]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:30:27 compute-0 sudo[224636]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:30:28 compute-0 sudo[224792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snmnapuypuuptseppzurogfcyzsuuhyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242627.453356-194-207964709521062/AnsiballZ_systemd.py'
Sep 30 14:30:28 compute-0 sudo[224792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:28.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:28 compute-0 python3.9[224794]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:30:28 compute-0 systemd[1]: Reloading.
Sep 30 14:30:28 compute-0 systemd-sysv-generator[224828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:30:28 compute-0 systemd-rc-local-generator[224824]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:30:28 compute-0 sshd-session[223538]: Failed password for root from 80.94.93.119 port 64956 ssh2
Sep 30 14:30:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:28.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:28 compute-0 sudo[224792]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:28 compute-0 ceph-mon[74194]: pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:30:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:28.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:30:28 compute-0 unix_chkpwd[224892]: password check failed for user (root)
Sep 30 14:30:29 compute-0 sudo[224983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dokuvzngnzkivbudvthqmsczldbzgjaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242628.934641-218-224861533438262/AnsiballZ_service_facts.py'
Sep 30 14:30:29 compute-0 sudo[224983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:30:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:30:29 compute-0 python3.9[224985]: ansible-ansible.builtin.service_facts Invoked
Sep 30 14:30:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:30:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:30:29 compute-0 network[225002]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 14:30:29 compute-0 network[225004]: 'network-scripts' will be removed from distribution in near future.
Sep 30 14:30:29 compute-0 network[225005]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 14:30:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:30:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:30:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:30:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:30:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:30:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:30:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:30.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:30.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:30 compute-0 sshd-session[223538]: Failed password for root from 80.94.93.119 port 64956 ssh2
Sep 30 14:30:30 compute-0 ceph-mon[74194]: pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:30:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:30:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:32.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:30:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:30:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:32.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:32 compute-0 sudo[224983]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:33 compute-0 ceph-mon[74194]: pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:30:33 compute-0 sshd-session[223538]: Received disconnect from 80.94.93.119 port 64956:11:  [preauth]
Sep 30 14:30:33 compute-0 sshd-session[223538]: Disconnected from authenticating user root 80.94.93.119 port 64956 [preauth]
Sep 30 14:30:33 compute-0 sshd-session[223538]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Sep 30 14:30:33 compute-0 sudo[225282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsytyurlzrioiohpblkzfrxdbfohuide ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242633.2051482-242-269049834707000/AnsiballZ_systemd.py'
Sep 30 14:30:33 compute-0 sudo[225282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:33 compute-0 python3.9[225284]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:30:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:30:33 compute-0 systemd[1]: Reloading.
Sep 30 14:30:33 compute-0 unix_chkpwd[225318]: password check failed for user (root)
Sep 30 14:30:33 compute-0 sshd-session[225154]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Sep 30 14:30:33 compute-0 systemd-rc-local-generator[225320]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:30:33 compute-0 systemd-sysv-generator[225324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:30:34 compute-0 ceph-mon[74194]: pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:30:34 compute-0 sudo[225282]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:34.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:34.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:34] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:30:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:34] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:30:34 compute-0 python3.9[225475]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:30:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:35 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:30:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:35 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:30:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:35 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:30:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:35 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:30:35 compute-0 sshd-session[225154]: Failed password for root from 80.94.93.119 port 29850 ssh2
Sep 30 14:30:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:35 compute-0 sudo[225636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcvkhjttgfwcwjeeccgfcplnukptnwsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242635.1744382-293-176418657167241/AnsiballZ_podman_container.py'
Sep 30 14:30:35 compute-0 sudo[225636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:35 compute-0 podman[225600]: 2025-09-30 14:30:35.719114086 +0000 UTC m=+0.104469986 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, io.buildah.version=1.41.3)
Sep 30 14:30:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:30:35 compute-0 python3.9[225646]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Sep 30 14:30:35 compute-0 unix_chkpwd[225684]: password check failed for user (root)
Sep 30 14:30:35 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:30:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:36.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:36 compute-0 ceph-mon[74194]: pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:30:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:37.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:30:37 compute-0 podman[225671]: 2025-09-30 14:30:37.127130971 +0000 UTC m=+1.187088082 image pull 4c2cf735485aec82560a51e8042a9e65bbe194a07c6812512d6a5e2ed955852b quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Sep 30 14:30:37 compute-0 podman[225730]: 2025-09-30 14:30:37.248507691 +0000 UTC m=+0.038137645 container create 4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.2916] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Sep 30 14:30:37 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Sep 30 14:30:37 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Sep 30 14:30:37 compute-0 kernel: veth0: entered allmulticast mode
Sep 30 14:30:37 compute-0 kernel: veth0: entered promiscuous mode
Sep 30 14:30:37 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Sep 30 14:30:37 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.3198] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.3226] device (veth0): carrier: link connected
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.3229] device (podman0): carrier: link connected
Sep 30 14:30:37 compute-0 podman[225730]: 2025-09-30 14:30:37.231322929 +0000 UTC m=+0.020952903 image pull 4c2cf735485aec82560a51e8042a9e65bbe194a07c6812512d6a5e2ed955852b quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Sep 30 14:30:37 compute-0 systemd-udevd[225768]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:30:37 compute-0 systemd-udevd[225765]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.3715] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.3722] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.3729] device (podman0): Activation: starting connection 'podman0' (53e139e9-9429-48a9-a124-c66be19899c0)
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.3730] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.3733] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.3735] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.3737] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Sep 30 14:30:37 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 30 14:30:37 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.4062] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.4064] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.4070] device (podman0): Activation: successful, device activated.
Sep 30 14:30:37 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Sep 30 14:30:37 compute-0 systemd[1]: Started libpod-conmon-4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad.scope.
Sep 30 14:30:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:30:37 compute-0 podman[225730]: 2025-09-30 14:30:37.613520024 +0000 UTC m=+0.403150028 container init 4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS)
Sep 30 14:30:37 compute-0 podman[225730]: 2025-09-30 14:30:37.622957537 +0000 UTC m=+0.412587491 container start 4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Sep 30 14:30:37 compute-0 podman[225730]: 2025-09-30 14:30:37.626798281 +0000 UTC m=+0.416428235 container attach 4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:30:37 compute-0 iscsid_config[225890]: iqn.1994-05.com.redhat:c24128bc9f8d
Sep 30 14:30:37 compute-0 systemd[1]: libpod-4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad.scope: Deactivated successfully.
Sep 30 14:30:37 compute-0 conmon[225890]: conmon 4ffe708cdc32742e9b50 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad.scope/container/memory.events
Sep 30 14:30:37 compute-0 podman[225730]: 2025-09-30 14:30:37.630006467 +0000 UTC m=+0.419636441 container died 4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Sep 30 14:30:37 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Sep 30 14:30:37 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Sep 30 14:30:37 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Sep 30 14:30:37 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Sep 30 14:30:37 compute-0 NetworkManager[45472]: <info>  [1759242637.6848] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 14:30:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Sep 30 14:30:38 compute-0 systemd[1]: run-netns-netns\x2d455d2d7e\x2d154e\x2de097\x2ddc8b\x2d593bc7c0db94.mount: Deactivated successfully.
Sep 30 14:30:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad-userdata-shm.mount: Deactivated successfully.
Sep 30 14:30:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-060474988d35314b0db4684ebf8f5da26f3642af3342750624ecff63e3531baf-merged.mount: Deactivated successfully.
Sep 30 14:30:38 compute-0 podman[225730]: 2025-09-30 14:30:38.032901037 +0000 UTC m=+0.822530991 container remove 4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:30:38 compute-0 python3.9[225646]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Sep 30 14:30:38 compute-0 systemd[1]: libpod-conmon-4ffe708cdc32742e9b50b967a9312fd96c1365ee367631a1a2bb3bb4f203c8ad.scope: Deactivated successfully.
Sep 30 14:30:38 compute-0 python3.9[225646]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Sep 30 14:30:38 compute-0 sudo[225636]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:30:38.245 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:30:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:30:38.246 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:30:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:30:38.246 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:30:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:38 compute-0 sshd-session[225154]: Failed password for root from 80.94.93.119 port 29850 ssh2
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:30:38 compute-0 sudo[226146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlbszdsskregnqeqsmasrfeowqlbyeoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242638.3521428-317-28774566396771/AnsiballZ_stat.py'
Sep 30 14:30:38 compute-0 sudo[226146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:38.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:38 compute-0 python3.9[226148]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:38 compute-0 sudo[226146]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f60000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:38.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:30:38 compute-0 ceph-mon[74194]: pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Sep 30 14:30:39 compute-0 sudo[226270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpgeqyhgcuzprppdacytkdorzpybqwzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242638.3521428-317-28774566396771/AnsiballZ_copy.py'
Sep 30 14:30:39 compute-0 sudo[226270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:39 compute-0 python3.9[226273]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242638.3521428-317-28774566396771/.source.iscsi _original_basename=.mpp90smk follow=False checksum=d61ba9e9db69abb9079159a62737a7ae389f451e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:39 compute-0 sudo[226270]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:30:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:40 compute-0 unix_chkpwd[226425]: password check failed for user (root)
Sep 30 14:30:40 compute-0 sudo[226424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoerincwkyxoikmdgbgoetkhaprjqdug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242639.7496364-362-172092729916480/AnsiballZ_file.py'
Sep 30 14:30:40 compute-0 sudo[226424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:40.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:40 compute-0 python3.9[226427]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:40 compute-0 sudo[226424]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:40.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54001460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:40 compute-0 ceph-mon[74194]: pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:30:40 compute-0 python3.9[226577]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:30:40 compute-0 sudo[226578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:30:40 compute-0 sudo[226578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:30:40 compute-0 sudo[226578]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:41 compute-0 podman[226729]: 2025-09-30 14:30:41.668684572 +0000 UTC m=+0.060993379 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:30:41 compute-0 sudo[226769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxfbexiuypccbmuyzjtweaqyapcodaxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242641.2117534-413-30961965359665/AnsiballZ_lineinfile.py'
Sep 30 14:30:41 compute-0 sudo[226769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Sep 30 14:30:41 compute-0 python3.9[226774]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:41 compute-0 sudo[226769]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143042 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:30:42 compute-0 sshd-session[225154]: Failed password for root from 80.94.93.119 port 29850 ssh2
Sep 30 14:30:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:42.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143042 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:30:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:42 compute-0 sudo[226925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isrxwidiljsivsnzegjbiigkqkxvcyfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242642.205047-440-987228453615/AnsiballZ_file.py'
Sep 30 14:30:42 compute-0 sudo[226925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:42.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:42 compute-0 python3.9[226927]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:30:42 compute-0 sudo[226925]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:42 compute-0 ceph-mon[74194]: pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Sep 30 14:30:43 compute-0 sudo[227077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azsokfskohlvnoixpyqxxvyjskcwczzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242642.8982868-464-50116785399240/AnsiballZ_stat.py'
Sep 30 14:30:43 compute-0 sudo[227077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:43 compute-0 python3.9[227079]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:43 compute-0 sudo[227077]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:43 compute-0 sudo[227156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zusxebqegosehtdppfozokapmntolpsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242642.8982868-464-50116785399240/AnsiballZ_file.py'
Sep 30 14:30:43 compute-0 sudo[227156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:43 compute-0 python3.9[227158]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:30:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Sep 30 14:30:43 compute-0 sudo[227156]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:44 compute-0 sshd-session[225154]: Received disconnect from 80.94.93.119 port 29850:11:  [preauth]
Sep 30 14:30:44 compute-0 sshd-session[225154]: Disconnected from authenticating user root 80.94.93.119 port 29850 [preauth]
Sep 30 14:30:44 compute-0 sshd-session[225154]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Sep 30 14:30:44 compute-0 sudo[227309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szefzoyhbbtxjezgnoctjnlyrawkdxbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242643.9428349-464-52674042050646/AnsiballZ_stat.py'
Sep 30 14:30:44 compute-0 sudo[227309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:44.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:44 compute-0 python3.9[227311]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:44 compute-0 sudo[227309]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:44 compute-0 sudo[227387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wedztjrjxdsttfmpblcvtqfrrycurdjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242643.9428349-464-52674042050646/AnsiballZ_file.py'
Sep 30 14:30:44 compute-0 sudo[227387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:30:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:30:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:44.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:44] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:30:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:44] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:30:44 compute-0 python3.9[227389]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:30:44 compute-0 sudo[227387]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:44 compute-0 ceph-mon[74194]: pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Sep 30 14:30:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:30:45 compute-0 sudo[227540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxwefyrauwksjjkbqyjkvjmhcfbveujt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242645.067627-533-13956606277440/AnsiballZ_file.py'
Sep 30 14:30:45 compute-0 sudo[227540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:45 compute-0 python3.9[227542]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:45 compute-0 sudo[227540]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Sep 30 14:30:46 compute-0 sudo[227693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkqczsflsqsmlvayccoilluuyzhosfhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242645.7591867-557-935238336887/AnsiballZ_stat.py'
Sep 30 14:30:46 compute-0 sudo[227693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:46 compute-0 ceph-mon[74194]: pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Sep 30 14:30:46 compute-0 python3.9[227695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:46 compute-0 sudo[227693]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:46.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:46 compute-0 sudo[227771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izxuwtnpbzmtrxdoolzyskkrdkjvanig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242645.7591867-557-935238336887/AnsiballZ_file.py'
Sep 30 14:30:46 compute-0 sudo[227771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:46 compute-0 python3.9[227773]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:46 compute-0 sudo[227771]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:46.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:47.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:30:47 compute-0 sudo[227923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiihimuezlaxwjrksvwsejhznaunhtlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242646.9959502-593-163961994797776/AnsiballZ_stat.py'
Sep 30 14:30:47 compute-0 sudo[227923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:47 compute-0 python3.9[227925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:47 compute-0 sudo[227923]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:47 compute-0 sudo[228002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdmusvcqsvkejrjdwmqnjywskqxfrxxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242646.9959502-593-163961994797776/AnsiballZ_file.py'
Sep 30 14:30:47 compute-0 sudo[228002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:47 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 30 14:30:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Sep 30 14:30:47 compute-0 python3.9[228004]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:47 compute-0 sudo[228002]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:48.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:48 compute-0 sudo[228155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klwxoxnetozfkfkiuqnguqwnxeocqzfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242648.1735036-629-91766506542728/AnsiballZ_systemd.py'
Sep 30 14:30:48 compute-0 sudo[228155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:48.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:48 compute-0 python3.9[228157]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:30:48 compute-0 systemd[1]: Reloading.
Sep 30 14:30:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:48 compute-0 systemd-rc-local-generator[228183]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:30:48 compute-0 systemd-sysv-generator[228188]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:30:48 compute-0 ceph-mon[74194]: pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Sep 30 14:30:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:48.917Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:30:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:48.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:30:49 compute-0 sudo[228155]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:49 compute-0 sudo[228345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghacqivlrvbkhdfehjswkvffncxnhcfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242649.4487998-653-235419785675079/AnsiballZ_stat.py'
Sep 30 14:30:49 compute-0 sudo[228345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:30:49 compute-0 python3.9[228347]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:49 compute-0 sudo[228345]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:50 compute-0 sudo[228424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iikuwfztxvfcdslbutuvtkwfapakvsop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242649.4487998-653-235419785675079/AnsiballZ_file.py'
Sep 30 14:30:50 compute-0 sudo[228424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:50.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:50 compute-0 python3.9[228426]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:50 compute-0 sudo[228424]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:50.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:50 compute-0 sudo[228576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzlcmwjxngnjnpablisuchmivfmiwkin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242650.574574-689-27184019622044/AnsiballZ_stat.py'
Sep 30 14:30:50 compute-0 sudo[228576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:50 compute-0 ceph-mon[74194]: pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:30:51 compute-0 python3.9[228578]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:51 compute-0 sudo[228576]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:51 compute-0 sudo[228654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppkubzjtbfwwwdiykbtmiuykfjrgfnme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242650.574574-689-27184019622044/AnsiballZ_file.py'
Sep 30 14:30:51 compute-0 sudo[228654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:51 compute-0 python3.9[228657]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:51 compute-0 sudo[228654]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:30:52 compute-0 sudo[228808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkgcluffojxqshcirnfxflnbdxfkjxzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242651.7270663-725-129739590364442/AnsiballZ_systemd.py'
Sep 30 14:30:52 compute-0 sudo[228808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:30:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:52.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:30:52 compute-0 python3.9[228810]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:30:52 compute-0 systemd[1]: Reloading.
Sep 30 14:30:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:52 compute-0 systemd-rc-local-generator[228836]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:30:52 compute-0 systemd-sysv-generator[228841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:30:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:30:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:52.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:30:52 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 14:30:52 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 14:30:52 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 14:30:52 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 14:30:52 compute-0 sudo[228808]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:52 compute-0 ceph-mon[74194]: pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:30:53 compute-0 sudo[229002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeopzqrywrlruammowielowrqsixvmbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242653.1842098-755-138819022868471/AnsiballZ_file.py'
Sep 30 14:30:53 compute-0 sudo[229002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:53 compute-0 python3.9[229004]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:30:53 compute-0 sudo[229002]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:54 compute-0 sudo[229155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahdorrnpenkribrsivbbefqrwfswqgsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242653.9295123-779-173034680207447/AnsiballZ_stat.py'
Sep 30 14:30:54 compute-0 sudo[229155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:54.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:54 compute-0 python3.9[229157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:54 compute-0 sudo[229155]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:54.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:54 compute-0 sudo[229278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edfakvztxcepowbyvkkxqblgpvfzfsjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242653.9295123-779-173034680207447/AnsiballZ_copy.py'
Sep 30 14:30:54 compute-0 sudo[229278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:54] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:30:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:30:54] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:30:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:54 compute-0 python3.9[229280]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242653.9295123-779-173034680207447/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:30:54 compute-0 ceph-mon[74194]: pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:54.963772) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242654963791, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 479, "num_deletes": 251, "total_data_size": 541175, "memory_usage": 551016, "flush_reason": "Manual Compaction"}
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242654967631, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 531873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17301, "largest_seqno": 17779, "table_properties": {"data_size": 529154, "index_size": 755, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6342, "raw_average_key_size": 18, "raw_value_size": 523850, "raw_average_value_size": 1545, "num_data_blocks": 34, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759242626, "oldest_key_time": 1759242626, "file_creation_time": 1759242654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 3914 microseconds, and 1949 cpu microseconds.
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:54.967681) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 531873 bytes OK
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:54.967701) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:54.969602) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:54.969622) EVENT_LOG_v1 {"time_micros": 1759242654969616, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:54.969641) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 538399, prev total WAL file size 538399, number of live WAL files 2.
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:54.970074) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(519KB)], [35(12MB)]
Sep 30 14:30:54 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242654970111, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 14054890, "oldest_snapshot_seqno": -1}
Sep 30 14:30:54 compute-0 sudo[229278]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4877 keys, 11849181 bytes, temperature: kUnknown
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242655056303, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 11849181, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11816012, "index_size": 19875, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 123261, "raw_average_key_size": 25, "raw_value_size": 11726753, "raw_average_value_size": 2404, "num_data_blocks": 827, "num_entries": 4877, "num_filter_entries": 4877, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759242654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:55.056690) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 11849181 bytes
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:55.059811) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.9 rd, 137.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 12.9 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(48.7) write-amplify(22.3) OK, records in: 5387, records dropped: 510 output_compression: NoCompression
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:55.059840) EVENT_LOG_v1 {"time_micros": 1759242655059827, "job": 16, "event": "compaction_finished", "compaction_time_micros": 86297, "compaction_time_cpu_micros": 24380, "output_level": 6, "num_output_files": 1, "total_output_size": 11849181, "num_input_records": 5387, "num_output_records": 4877, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242655060137, "job": 16, "event": "table_file_deletion", "file_number": 37}
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242655063594, "job": 16, "event": "table_file_deletion", "file_number": 35}
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:54.970016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:55.063670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:55.063676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:55.063679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:55.063681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:55 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:30:55.063684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:30:55 compute-0 sudo[229431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-modhplityfuhicubedldzfprnuqevzjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242655.3902528-830-93833339552276/AnsiballZ_file.py'
Sep 30 14:30:55 compute-0 sudo[229431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:30:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:55 compute-0 python3.9[229433]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:30:55 compute-0 sudo[229431]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:56.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:56 compute-0 sudo[229584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebscfepwhzdhjtszdwgjpzdojhiezfde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242656.0899134-854-194141685708205/AnsiballZ_stat.py'
Sep 30 14:30:56 compute-0 sudo[229584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:56 compute-0 python3.9[229586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:30:56 compute-0 sudo[229584]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:56.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:56 compute-0 sudo[229707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdagdkxbyttrvimeajwvvtfhparemhel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242656.0899134-854-194141685708205/AnsiballZ_copy.py'
Sep 30 14:30:56 compute-0 sudo[229707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:56 compute-0 ceph-mon[74194]: pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:57.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:30:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:57.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:30:57 compute-0 python3.9[229709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242656.0899134-854-194141685708205/.source.json _original_basename=.vvvdbbkx follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:57 compute-0 sudo[229707]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:57 compute-0 sudo[229860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhnkkltlewroxwbuyeqzswodfzepcqdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242657.2851307-899-66597046214319/AnsiballZ_file.py'
Sep 30 14:30:57 compute-0 sudo[229860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:57 compute-0 python3.9[229862]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:30:57 compute-0 sudo[229860]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:58 compute-0 sudo[230013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzynpuwdfzswcaiupxwhhzsrzbwkiqcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242658.0012162-923-161357001427563/AnsiballZ_stat.py'
Sep 30 14:30:58 compute-0 sudo[230013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:30:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:30:58.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:30:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:58 compute-0 sudo[230013]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:30:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:30:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:30:58.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:30:58 compute-0 sudo[230136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdyvoomrfvezauqkociixjkvwucjxsva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242658.0012162-923-161357001427563/AnsiballZ_copy.py'
Sep 30 14:30:58 compute-0 sudo[230136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:30:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:30:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:30:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:58.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:30:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:30:58.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:30:58 compute-0 ceph-mon[74194]: pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:59 compute-0 sudo[230136]: pam_unix(sudo:session): session closed for user root
Sep 30 14:30:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143059 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:30:59
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.nfs', '.mgr', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:30:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:30:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:30:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:30:59 compute-0 sudo[230290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksnteuoqawjcmttgtooxebcbzegyssch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242659.4656126-974-166885069395869/AnsiballZ_container_config_data.py'
Sep 30 14:30:59 compute-0 sudo[230290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:31:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:00 compute-0 python3.9[230292]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Sep 30 14:31:00 compute-0 sudo[230290]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:00.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:00.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:00 compute-0 sudo[230442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oljmqpsuzwcoyqysuybonuldbjercaee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242660.3975894-1001-24808632969608/AnsiballZ_container_config_hash.py'
Sep 30 14:31:00 compute-0 sudo[230442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:31:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:31:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:31:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:31:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:31:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:31:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:31:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:31:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:31:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:31:01 compute-0 ceph-mon[74194]: pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:01 compute-0 python3.9[230444]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 14:31:01 compute-0 sudo[230442]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:01 compute-0 sudo[230445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:31:01 compute-0 sudo[230445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:01 compute-0 sudo[230445]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:01 compute-0 sudo[230621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afyclgeayhlcernrwadluomhsqhefuep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242661.3468223-1028-72276458295693/AnsiballZ_podman_container_info.py'
Sep 30 14:31:01 compute-0 sudo[230621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:31:01 compute-0 python3.9[230623]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Sep 30 14:31:02 compute-0 ceph-mon[74194]: pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:31:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:02 compute-0 sudo[230621]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:02.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:02 compute-0 sudo[230675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:31:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:02.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:02 compute-0 sudo[230675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:02 compute-0 sudo[230675]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:02 compute-0 sudo[230700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:31:02 compute-0 sudo[230700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:03 compute-0 podman[230821]: 2025-09-30 14:31:03.255540479 +0000 UTC m=+0.059142568 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:31:03 compute-0 podman[230821]: 2025-09-30 14:31:03.337974131 +0000 UTC m=+0.141576240 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 14:31:03 compute-0 sudo[231035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qomkrwzmkhvvqfnapjlomtcwaleabxtu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759242663.183876-1067-41957580601694/AnsiballZ_edpm_container_manage.py'
Sep 30 14:31:03 compute-0 sudo[231035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:03 compute-0 podman[231045]: 2025-09-30 14:31:03.76230657 +0000 UTC m=+0.060831093 container exec 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:31:03 compute-0 podman[231045]: 2025-09-30 14:31:03.776142202 +0000 UTC m=+0.074666725 container exec_died 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:31:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:31:03 compute-0 python3[231042]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 14:31:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:04 compute-0 podman[231159]: 2025-09-30 14:31:04.078272231 +0000 UTC m=+0.052170071 container exec a9f632908bc14e1c8c508281bd22a30acb3acffb173c593a50e9fa74e66cefeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:31:04 compute-0 podman[231159]: 2025-09-30 14:31:04.090442908 +0000 UTC m=+0.064340728 container exec_died a9f632908bc14e1c8c508281bd22a30acb3acffb173c593a50e9fa74e66cefeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:31:04 compute-0 podman[231193]: 2025-09-30 14:31:04.133091662 +0000 UTC m=+0.049776797 container create 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20250923, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Sep 30 14:31:04 compute-0 podman[231193]: 2025-09-30 14:31:04.109796477 +0000 UTC m=+0.026481632 image pull 4c2cf735485aec82560a51e8042a9e65bbe194a07c6812512d6a5e2ed955852b quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Sep 30 14:31:04 compute-0 python3[231042]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Sep 30 14:31:04 compute-0 sudo[231035]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:04.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:04 compute-0 podman[231276]: 2025-09-30 14:31:04.31183327 +0000 UTC m=+0.046987753 container exec ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:31:04 compute-0 podman[231276]: 2025-09-30 14:31:04.320515473 +0000 UTC m=+0.055669936 container exec_died ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:31:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:04 compute-0 podman[231349]: 2025-09-30 14:31:04.536304724 +0000 UTC m=+0.067278246 container exec df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, name=keepalived, architecture=x86_64, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Sep 30 14:31:04 compute-0 podman[231349]: 2025-09-30 14:31:04.564485341 +0000 UTC m=+0.095458833 container exec_died df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, version=2.2.4, io.openshift.expose-services=)
Sep 30 14:31:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:04.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:04] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:31:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:04] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:31:04 compute-0 podman[231481]: 2025-09-30 14:31:04.773381307 +0000 UTC m=+0.046755566 container exec b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:31:04 compute-0 podman[231481]: 2025-09-30 14:31:04.802477738 +0000 UTC m=+0.075851617 container exec_died b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:31:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:04 compute-0 ceph-mon[74194]: pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:31:04 compute-0 sudo[231612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caerkeryvxdngxyjrureiszhjtherbqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242664.639269-1091-156637484862645/AnsiballZ_stat.py'
Sep 30 14:31:04 compute-0 sudo[231612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:05 compute-0 podman[231630]: 2025-09-30 14:31:05.022097172 +0000 UTC m=+0.056262811 container exec 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:31:05 compute-0 python3.9[231617]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:31:05 compute-0 sudo[231612]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:05 compute-0 podman[231630]: 2025-09-30 14:31:05.220908388 +0000 UTC m=+0.255074027 container exec_died 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:31:05 compute-0 podman[231818]: 2025-09-30 14:31:05.561073018 +0000 UTC m=+0.050955629 container exec e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:31:05 compute-0 podman[231818]: 2025-09-30 14:31:05.613559497 +0000 UTC m=+0.103442108 container exec_died e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:31:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:05 compute-0 sudo[230700]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:31:05 compute-0 sudo[231928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txqooygixdnoiuikjpbybjshvvizftnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242665.4244728-1118-101116128256092/AnsiballZ_file.py'
Sep 30 14:31:05 compute-0 sudo[231928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:31:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:05 compute-0 sudo[231931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:31:05 compute-0 sudo[231931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:05 compute-0 sudo[231931]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:05 compute-0 sudo[231963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:31:05 compute-0 sudo[231963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:31:05 compute-0 podman[231956]: 2025-09-30 14:31:05.848332978 +0000 UTC m=+0.086318258 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller)
Sep 30 14:31:05 compute-0 python3.9[231930]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:05 compute-0 sudo[231928]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:06 compute-0 sudo[232095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjlxumivcygrxopgetgyzxiltgplnznk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242665.4244728-1118-101116128256092/AnsiballZ_stat.py'
Sep 30 14:31:06 compute-0 sudo[232095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:06 compute-0 python3.9[232097]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:31:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:06.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:06 compute-0 sudo[232095]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:06 compute-0 sudo[231963]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:31:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:31:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:31:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:31:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:31:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 680 B/s rd, 97 B/s wr, 0 op/s
Sep 30 14:31:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:31:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:31:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:31:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:31:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:31:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:31:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:31:06 compute-0 sudo[232159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:31:06 compute-0 sudo[232159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:06 compute-0 sudo[232159]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:06 compute-0 sudo[232192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:31:06 compute-0 sudo[232192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:31:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:06.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:31:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:06 compute-0 ceph-mon[74194]: pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:31:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:31:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:31:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:31:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:31:06 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:31:06 compute-0 ceph-mon[74194]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Sep 30 14:31:06 compute-0 sudo[232341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjjkdkxygvfmyywrvhtbtmvfhyrcbptr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242666.3738403-1118-173531167498022/AnsiballZ_copy.py'
Sep 30 14:31:06 compute-0 sudo[232341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:06 compute-0 podman[232358]: 2025-09-30 14:31:06.918037029 +0000 UTC m=+0.035825783 container create 55f83d3593a73ae2b94ef8fd36e406726fa887276e812aaf5375ee8d796529e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_herschel, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:31:06 compute-0 systemd[1]: Started libpod-conmon-55f83d3593a73ae2b94ef8fd36e406726fa887276e812aaf5375ee8d796529e9.scope.
Sep 30 14:31:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:31:06 compute-0 podman[232358]: 2025-09-30 14:31:06.901003161 +0000 UTC m=+0.018791915 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:31:07 compute-0 podman[232358]: 2025-09-30 14:31:07.004635063 +0000 UTC m=+0.122423837 container init 55f83d3593a73ae2b94ef8fd36e406726fa887276e812aaf5375ee8d796529e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:31:07 compute-0 podman[232358]: 2025-09-30 14:31:07.012331789 +0000 UTC m=+0.130120553 container start 55f83d3593a73ae2b94ef8fd36e406726fa887276e812aaf5375ee8d796529e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 14:31:07 compute-0 podman[232358]: 2025-09-30 14:31:07.015482224 +0000 UTC m=+0.133271008 container attach 55f83d3593a73ae2b94ef8fd36e406726fa887276e812aaf5375ee8d796529e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 14:31:07 compute-0 wizardly_herschel[232374]: 167 167
Sep 30 14:31:07 compute-0 systemd[1]: libpod-55f83d3593a73ae2b94ef8fd36e406726fa887276e812aaf5375ee8d796529e9.scope: Deactivated successfully.
Sep 30 14:31:07 compute-0 conmon[232374]: conmon 55f83d3593a73ae2b94e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55f83d3593a73ae2b94ef8fd36e406726fa887276e812aaf5375ee8d796529e9.scope/container/memory.events
Sep 30 14:31:07 compute-0 podman[232358]: 2025-09-30 14:31:07.019279256 +0000 UTC m=+0.137068010 container died 55f83d3593a73ae2b94ef8fd36e406726fa887276e812aaf5375ee8d796529e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_herschel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:31:07 compute-0 python3.9[232348]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759242666.3738403-1118-173531167498022/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:07.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:31:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-eea5d4451ee57a5b87c597615beb5be94322200b8a1b7cd3e560fabee0fb8158-merged.mount: Deactivated successfully.
Sep 30 14:31:07 compute-0 podman[232358]: 2025-09-30 14:31:07.058420956 +0000 UTC m=+0.176209710 container remove 55f83d3593a73ae2b94ef8fd36e406726fa887276e812aaf5375ee8d796529e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:31:07 compute-0 sudo[232341]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:07 compute-0 systemd[1]: libpod-conmon-55f83d3593a73ae2b94ef8fd36e406726fa887276e812aaf5375ee8d796529e9.scope: Deactivated successfully.
Sep 30 14:31:07 compute-0 podman[232422]: 2025-09-30 14:31:07.212767079 +0000 UTC m=+0.041020512 container create 1e9edc847048092d4afa87c774a9fa06507909ce471e8ca6c0ea594e6b753eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:31:07 compute-0 systemd[1]: Started libpod-conmon-1e9edc847048092d4afa87c774a9fa06507909ce471e8ca6c0ea594e6b753eb5.scope.
Sep 30 14:31:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a441fd5f4ed513cac2aaf76c9297a522d96133064dd3aab3d615adccfa996c9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a441fd5f4ed513cac2aaf76c9297a522d96133064dd3aab3d615adccfa996c9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a441fd5f4ed513cac2aaf76c9297a522d96133064dd3aab3d615adccfa996c9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a441fd5f4ed513cac2aaf76c9297a522d96133064dd3aab3d615adccfa996c9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a441fd5f4ed513cac2aaf76c9297a522d96133064dd3aab3d615adccfa996c9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:07 compute-0 podman[232422]: 2025-09-30 14:31:07.194310414 +0000 UTC m=+0.022563867 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:31:07 compute-0 sudo[232491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmjvfddizbggsnzixbpxtfcyahexhmch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242666.3738403-1118-173531167498022/AnsiballZ_systemd.py'
Sep 30 14:31:07 compute-0 podman[232422]: 2025-09-30 14:31:07.294620686 +0000 UTC m=+0.122874129 container init 1e9edc847048092d4afa87c774a9fa06507909ce471e8ca6c0ea594e6b753eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:31:07 compute-0 sudo[232491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:07 compute-0 podman[232422]: 2025-09-30 14:31:07.302631191 +0000 UTC m=+0.130884624 container start 1e9edc847048092d4afa87c774a9fa06507909ce471e8ca6c0ea594e6b753eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:31:07 compute-0 podman[232422]: 2025-09-30 14:31:07.307297746 +0000 UTC m=+0.135551199 container attach 1e9edc847048092d4afa87c774a9fa06507909ce471e8ca6c0ea594e6b753eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lamport, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:31:07 compute-0 python3.9[232494]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 14:31:07 compute-0 systemd[1]: Reloading.
Sep 30 14:31:07 compute-0 priceless_lamport[232473]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:31:07 compute-0 priceless_lamport[232473]: --> All data devices are unavailable
Sep 30 14:31:07 compute-0 ceph-mon[74194]: pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 680 B/s rd, 97 B/s wr, 0 op/s
Sep 30 14:31:07 compute-0 ceph-mon[74194]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Sep 30 14:31:07 compute-0 podman[232422]: 2025-09-30 14:31:07.708889745 +0000 UTC m=+0.537143178 container died 1e9edc847048092d4afa87c774a9fa06507909ce471e8ca6c0ea594e6b753eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:31:07 compute-0 systemd-sysv-generator[232546]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:31:07 compute-0 systemd-rc-local-generator[232542]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:31:08 compute-0 systemd[1]: libpod-1e9edc847048092d4afa87c774a9fa06507909ce471e8ca6c0ea594e6b753eb5.scope: Deactivated successfully.
Sep 30 14:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a441fd5f4ed513cac2aaf76c9297a522d96133064dd3aab3d615adccfa996c9d-merged.mount: Deactivated successfully.
Sep 30 14:31:08 compute-0 podman[232422]: 2025-09-30 14:31:08.059150766 +0000 UTC m=+0.887404199 container remove 1e9edc847048092d4afa87c774a9fa06507909ce471e8ca6c0ea594e6b753eb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_lamport, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:31:08 compute-0 systemd[1]: libpod-conmon-1e9edc847048092d4afa87c774a9fa06507909ce471e8ca6c0ea594e6b753eb5.scope: Deactivated successfully.
Sep 30 14:31:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:08 compute-0 sudo[232491]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:08 compute-0 sudo[232192]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:08 compute-0 sudo[232554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:31:08 compute-0 sudo[232554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:08 compute-0 sudo[232554]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:08 compute-0 sudo[232602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:31:08 compute-0 sudo[232602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:31:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:08.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:08 compute-0 sudo[232677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slhhlyoanxegttdomsbzbwutieyrvsuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242666.3738403-1118-173531167498022/AnsiballZ_systemd.py'
Sep 30 14:31:08 compute-0 sudo[232677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 680 B/s rd, 97 B/s wr, 0 op/s
Sep 30 14:31:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:08 compute-0 podman[232722]: 2025-09-30 14:31:08.648995316 +0000 UTC m=+0.044084424 container create 87f36a6a68be13afb2db7d3fc93b4d990cf3343fc1d2105cb00e9ae48954e0ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:31:08 compute-0 python3.9[232679]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:31:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:08.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:08 compute-0 systemd[1]: Started libpod-conmon-87f36a6a68be13afb2db7d3fc93b4d990cf3343fc1d2105cb00e9ae48954e0ab.scope.
Sep 30 14:31:08 compute-0 podman[232722]: 2025-09-30 14:31:08.629222765 +0000 UTC m=+0.024311893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:31:08 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:31:08 compute-0 systemd[1]: Reloading.
Sep 30 14:31:08 compute-0 podman[232722]: 2025-09-30 14:31:08.758923357 +0000 UTC m=+0.154012545 container init 87f36a6a68be13afb2db7d3fc93b4d990cf3343fc1d2105cb00e9ae48954e0ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:31:08 compute-0 podman[232722]: 2025-09-30 14:31:08.766379697 +0000 UTC m=+0.161468805 container start 87f36a6a68be13afb2db7d3fc93b4d990cf3343fc1d2105cb00e9ae48954e0ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 14:31:08 compute-0 podman[232722]: 2025-09-30 14:31:08.769425058 +0000 UTC m=+0.164514196 container attach 87f36a6a68be13afb2db7d3fc93b4d990cf3343fc1d2105cb00e9ae48954e0ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:31:08 compute-0 keen_faraday[232740]: 167 167
Sep 30 14:31:08 compute-0 podman[232722]: 2025-09-30 14:31:08.774285199 +0000 UTC m=+0.169374307 container died 87f36a6a68be13afb2db7d3fc93b4d990cf3343fc1d2105cb00e9ae48954e0ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_faraday, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:31:08 compute-0 systemd-rc-local-generator[232780]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:31:08 compute-0 systemd-sysv-generator[232785]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:31:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:08.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:31:09 compute-0 systemd[1]: libpod-87f36a6a68be13afb2db7d3fc93b4d990cf3343fc1d2105cb00e9ae48954e0ab.scope: Deactivated successfully.
Sep 30 14:31:09 compute-0 systemd[1]: Starting iscsid container...
Sep 30 14:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-da835af9ac43c590f4a96050a07a9e15ca44a85dc23af17906f8f853f0f49889-merged.mount: Deactivated successfully.
Sep 30 14:31:09 compute-0 podman[232722]: 2025-09-30 14:31:09.174428659 +0000 UTC m=+0.569517767 container remove 87f36a6a68be13afb2db7d3fc93b4d990cf3343fc1d2105cb00e9ae48954e0ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_faraday, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:31:09 compute-0 systemd[1]: libpod-conmon-87f36a6a68be13afb2db7d3fc93b4d990cf3343fc1d2105cb00e9ae48954e0ab.scope: Deactivated successfully.
Sep 30 14:31:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2979a5ad389f3f51384f1c74fc26557f9c18e001553a712c47ad15ae232787/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2979a5ad389f3f51384f1c74fc26557f9c18e001553a712c47ad15ae232787/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2979a5ad389f3f51384f1c74fc26557f9c18e001553a712c47ad15ae232787/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:09 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0.
Sep 30 14:31:09 compute-0 podman[232792]: 2025-09-30 14:31:09.293282139 +0000 UTC m=+0.172962854 container init 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3)
Sep 30 14:31:09 compute-0 iscsid[232809]: + sudo -E kolla_set_configs
Sep 30 14:31:09 compute-0 sudo[232830]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Sep 30 14:31:09 compute-0 podman[232792]: 2025-09-30 14:31:09.318467155 +0000 UTC m=+0.198147860 container start 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible)
Sep 30 14:31:09 compute-0 systemd[1]: Created slice User Slice of UID 0.
Sep 30 14:31:09 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Sep 30 14:31:09 compute-0 podman[232792]: iscsid
Sep 30 14:31:09 compute-0 systemd[1]: Started iscsid container.
Sep 30 14:31:09 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Sep 30 14:31:09 compute-0 systemd[1]: Starting User Manager for UID 0...
Sep 30 14:31:09 compute-0 systemd[232845]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Sep 30 14:31:09 compute-0 podman[232818]: 2025-09-30 14:31:09.371913869 +0000 UTC m=+0.079156525 container create 9f1055f4eb3b8512eb5a5b7671601b2405b4df06cca3903b004668af42786951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:31:09 compute-0 sudo[232677]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:09 compute-0 systemd[1]: Started libpod-conmon-9f1055f4eb3b8512eb5a5b7671601b2405b4df06cca3903b004668af42786951.scope.
Sep 30 14:31:09 compute-0 podman[232818]: 2025-09-30 14:31:09.318264869 +0000 UTC m=+0.025507535 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:31:09 compute-0 podman[232831]: 2025-09-30 14:31:09.414339628 +0000 UTC m=+0.084884279 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Sep 30 14:31:09 compute-0 systemd[1]: 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0-523f6602538847c9.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 14:31:09 compute-0 systemd[1]: 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0-523f6602538847c9.service: Failed with result 'exit-code'.
Sep 30 14:31:09 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65498c62bd3ef6657615136482283396501d423445efdf76366dfe3ed86fc0c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65498c62bd3ef6657615136482283396501d423445efdf76366dfe3ed86fc0c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65498c62bd3ef6657615136482283396501d423445efdf76366dfe3ed86fc0c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65498c62bd3ef6657615136482283396501d423445efdf76366dfe3ed86fc0c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:09 compute-0 podman[232818]: 2025-09-30 14:31:09.464990397 +0000 UTC m=+0.172233033 container init 9f1055f4eb3b8512eb5a5b7671601b2405b4df06cca3903b004668af42786951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_almeida, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:31:09 compute-0 podman[232818]: 2025-09-30 14:31:09.473150996 +0000 UTC m=+0.180393652 container start 9f1055f4eb3b8512eb5a5b7671601b2405b4df06cca3903b004668af42786951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:31:09 compute-0 podman[232818]: 2025-09-30 14:31:09.47700955 +0000 UTC m=+0.184252206 container attach 9f1055f4eb3b8512eb5a5b7671601b2405b4df06cca3903b004668af42786951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_almeida, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:31:09 compute-0 systemd[232845]: Queued start job for default target Main User Target.
Sep 30 14:31:09 compute-0 systemd[232845]: Created slice User Application Slice.
Sep 30 14:31:09 compute-0 systemd[232845]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Sep 30 14:31:09 compute-0 systemd[232845]: Started Daily Cleanup of User's Temporary Directories.
Sep 30 14:31:09 compute-0 systemd[232845]: Reached target Paths.
Sep 30 14:31:09 compute-0 systemd[232845]: Reached target Timers.
Sep 30 14:31:09 compute-0 systemd[232845]: Starting D-Bus User Message Bus Socket...
Sep 30 14:31:09 compute-0 systemd[232845]: Starting Create User's Volatile Files and Directories...
Sep 30 14:31:09 compute-0 systemd[232845]: Finished Create User's Volatile Files and Directories.
Sep 30 14:31:09 compute-0 systemd[232845]: Listening on D-Bus User Message Bus Socket.
Sep 30 14:31:09 compute-0 systemd[232845]: Reached target Sockets.
Sep 30 14:31:09 compute-0 systemd[232845]: Reached target Basic System.
Sep 30 14:31:09 compute-0 systemd[232845]: Reached target Main User Target.
Sep 30 14:31:09 compute-0 systemd[232845]: Startup finished in 160ms.
Sep 30 14:31:09 compute-0 systemd[1]: Started User Manager for UID 0.
Sep 30 14:31:09 compute-0 systemd[1]: Started Session c3 of User root.
Sep 30 14:31:09 compute-0 sudo[232830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 14:31:09 compute-0 iscsid[232809]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 14:31:09 compute-0 iscsid[232809]: INFO:__main__:Validating config file
Sep 30 14:31:09 compute-0 iscsid[232809]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 14:31:09 compute-0 iscsid[232809]: INFO:__main__:Writing out command to execute
Sep 30 14:31:09 compute-0 sudo[232830]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:09 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Sep 30 14:31:09 compute-0 iscsid[232809]: ++ cat /run_command
Sep 30 14:31:09 compute-0 iscsid[232809]: + CMD='/usr/sbin/iscsid -f'
Sep 30 14:31:09 compute-0 iscsid[232809]: + ARGS=
Sep 30 14:31:09 compute-0 iscsid[232809]: + sudo kolla_copy_cacerts
Sep 30 14:31:09 compute-0 sudo[232944]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Sep 30 14:31:09 compute-0 systemd[1]: Started Session c4 of User root.
Sep 30 14:31:09 compute-0 sudo[232944]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 14:31:09 compute-0 sudo[232944]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:09 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Sep 30 14:31:09 compute-0 iscsid[232809]: + [[ ! -n '' ]]
Sep 30 14:31:09 compute-0 iscsid[232809]: + . kolla_extend_start
Sep 30 14:31:09 compute-0 iscsid[232809]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Sep 30 14:31:09 compute-0 iscsid[232809]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Sep 30 14:31:09 compute-0 iscsid[232809]: Running command: '/usr/sbin/iscsid -f'
Sep 30 14:31:09 compute-0 iscsid[232809]: + umask 0022
Sep 30 14:31:09 compute-0 iscsid[232809]: + exec /usr/sbin/iscsid -f
Sep 30 14:31:09 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Sep 30 14:31:09 compute-0 ceph-mon[74194]: pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 680 B/s rd, 97 B/s wr, 0 op/s
Sep 30 14:31:09 compute-0 happy_almeida[232867]: {
Sep 30 14:31:09 compute-0 happy_almeida[232867]:     "0": [
Sep 30 14:31:09 compute-0 happy_almeida[232867]:         {
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "devices": [
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "/dev/loop3"
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             ],
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "lv_name": "ceph_lv0",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "lv_size": "21470642176",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "name": "ceph_lv0",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "tags": {
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.cluster_name": "ceph",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.crush_device_class": "",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.encrypted": "0",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.osd_id": "0",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.type": "block",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.vdo": "0",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:                 "ceph.with_tpm": "0"
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             },
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "type": "block",
Sep 30 14:31:09 compute-0 happy_almeida[232867]:             "vg_name": "ceph_vg0"
Sep 30 14:31:09 compute-0 happy_almeida[232867]:         }
Sep 30 14:31:09 compute-0 happy_almeida[232867]:     ]
Sep 30 14:31:09 compute-0 happy_almeida[232867]: }
Sep 30 14:31:09 compute-0 systemd[1]: libpod-9f1055f4eb3b8512eb5a5b7671601b2405b4df06cca3903b004668af42786951.scope: Deactivated successfully.
Sep 30 14:31:09 compute-0 podman[232818]: 2025-09-30 14:31:09.795118878 +0000 UTC m=+0.502361534 container died 9f1055f4eb3b8512eb5a5b7671601b2405b4df06cca3903b004668af42786951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:31:09 compute-0 podman[232818]: 2025-09-30 14:31:09.83731402 +0000 UTC m=+0.544556656 container remove 9f1055f4eb3b8512eb5a5b7671601b2405b4df06cca3903b004668af42786951 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:31:09 compute-0 systemd[1]: libpod-conmon-9f1055f4eb3b8512eb5a5b7671601b2405b4df06cca3903b004668af42786951.scope: Deactivated successfully.
Sep 30 14:31:09 compute-0 sudo[232602]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:09 compute-0 sudo[233060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:31:09 compute-0 sudo[233060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:09 compute-0 sudo[233060]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:09 compute-0 sudo[233085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:31:10 compute-0 sudo[233085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:10 compute-0 python3.9[233059]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:31:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-65498c62bd3ef6657615136482283396501d423445efdf76366dfe3ed86fc0c1-merged.mount: Deactivated successfully.
Sep 30 14:31:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:10.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 680 B/s rd, 97 B/s wr, 0 op/s
Sep 30 14:31:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:10 compute-0 podman[233232]: 2025-09-30 14:31:10.427899732 +0000 UTC m=+0.051774931 container create 8ee524a1478081f18378f0ba977ad07595747e97383d94d419dbda3ff9de1a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jepsen, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:31:10 compute-0 systemd[1]: Started libpod-conmon-8ee524a1478081f18378f0ba977ad07595747e97383d94d419dbda3ff9de1a8b.scope.
Sep 30 14:31:10 compute-0 podman[233232]: 2025-09-30 14:31:10.398000569 +0000 UTC m=+0.021875798 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:31:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:31:10 compute-0 podman[233232]: 2025-09-30 14:31:10.524910565 +0000 UTC m=+0.148785784 container init 8ee524a1478081f18378f0ba977ad07595747e97383d94d419dbda3ff9de1a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jepsen, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:31:10 compute-0 podman[233232]: 2025-09-30 14:31:10.532074388 +0000 UTC m=+0.155949587 container start 8ee524a1478081f18378f0ba977ad07595747e97383d94d419dbda3ff9de1a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:31:10 compute-0 stoic_jepsen[233271]: 167 167
Sep 30 14:31:10 compute-0 systemd[1]: libpod-8ee524a1478081f18378f0ba977ad07595747e97383d94d419dbda3ff9de1a8b.scope: Deactivated successfully.
Sep 30 14:31:10 compute-0 podman[233232]: 2025-09-30 14:31:10.546348471 +0000 UTC m=+0.170223670 container attach 8ee524a1478081f18378f0ba977ad07595747e97383d94d419dbda3ff9de1a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jepsen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:31:10 compute-0 podman[233232]: 2025-09-30 14:31:10.547160113 +0000 UTC m=+0.171035312 container died 8ee524a1478081f18378f0ba977ad07595747e97383d94d419dbda3ff9de1a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b470e33a7fcc949d568b1ef27f3ed10cac4960d0d7a577866c8bfe61c5ec0c47-merged.mount: Deactivated successfully.
Sep 30 14:31:10 compute-0 podman[233232]: 2025-09-30 14:31:10.588661336 +0000 UTC m=+0.212536535 container remove 8ee524a1478081f18378f0ba977ad07595747e97383d94d419dbda3ff9de1a8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:31:10 compute-0 systemd[1]: libpod-conmon-8ee524a1478081f18378f0ba977ad07595747e97383d94d419dbda3ff9de1a8b.scope: Deactivated successfully.
Sep 30 14:31:10 compute-0 sudo[233338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoqzclzmxocfxbnmnnjnolbejiwafgvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242670.301127-1229-191844642938974/AnsiballZ_file.py'
Sep 30 14:31:10 compute-0 sudo[233338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:10.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:10 compute-0 podman[233348]: 2025-09-30 14:31:10.747027947 +0000 UTC m=+0.043147909 container create 3bc73524e4585e8947406a09368629bff8965a3c8c902c15f8bffeddd466af4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:31:10 compute-0 systemd[1]: Started libpod-conmon-3bc73524e4585e8947406a09368629bff8965a3c8c902c15f8bffeddd466af4b.scope.
Sep 30 14:31:10 compute-0 podman[233348]: 2025-09-30 14:31:10.726776793 +0000 UTC m=+0.022896785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:31:10 compute-0 python3.9[233342]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7a0e098884122da67c86184dcf79c3c60924d4717e4bad968a1d6727a8b724/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7a0e098884122da67c86184dcf79c3c60924d4717e4bad968a1d6727a8b724/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7a0e098884122da67c86184dcf79c3c60924d4717e4bad968a1d6727a8b724/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b7a0e098884122da67c86184dcf79c3c60924d4717e4bad968a1d6727a8b724/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:31:10 compute-0 sudo[233338]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:10 compute-0 podman[233348]: 2025-09-30 14:31:10.860059681 +0000 UTC m=+0.156179663 container init 3bc73524e4585e8947406a09368629bff8965a3c8c902c15f8bffeddd466af4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_williamson, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:31:10 compute-0 podman[233348]: 2025-09-30 14:31:10.869055102 +0000 UTC m=+0.165175054 container start 3bc73524e4585e8947406a09368629bff8965a3c8c902c15f8bffeddd466af4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_williamson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 14:31:10 compute-0 podman[233348]: 2025-09-30 14:31:10.872307629 +0000 UTC m=+0.168427631 container attach 3bc73524e4585e8947406a09368629bff8965a3c8c902c15f8bffeddd466af4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_williamson, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 14:31:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c0013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:11 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:31:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:11 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:31:11 compute-0 lvm[233584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:31:11 compute-0 lvm[233584]: VG ceph_vg0 finished
Sep 30 14:31:11 compute-0 sudo[233588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlhpycyzxdtmknyievrtcnjafhkzckxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242671.2574983-1262-250262715408130/AnsiballZ_service_facts.py'
Sep 30 14:31:11 compute-0 sudo[233588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:11 compute-0 pensive_williamson[233364]: {}
Sep 30 14:31:11 compute-0 systemd[1]: libpod-3bc73524e4585e8947406a09368629bff8965a3c8c902c15f8bffeddd466af4b.scope: Deactivated successfully.
Sep 30 14:31:11 compute-0 podman[233348]: 2025-09-30 14:31:11.619137594 +0000 UTC m=+0.915257576 container died 3bc73524e4585e8947406a09368629bff8965a3c8c902c15f8bffeddd466af4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 14:31:11 compute-0 systemd[1]: libpod-3bc73524e4585e8947406a09368629bff8965a3c8c902c15f8bffeddd466af4b.scope: Consumed 1.082s CPU time.
Sep 30 14:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b7a0e098884122da67c86184dcf79c3c60924d4717e4bad968a1d6727a8b724-merged.mount: Deactivated successfully.
Sep 30 14:31:11 compute-0 podman[233348]: 2025-09-30 14:31:11.670689508 +0000 UTC m=+0.966809470 container remove 3bc73524e4585e8947406a09368629bff8965a3c8c902c15f8bffeddd466af4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:31:11 compute-0 systemd[1]: libpod-conmon-3bc73524e4585e8947406a09368629bff8965a3c8c902c15f8bffeddd466af4b.scope: Deactivated successfully.
Sep 30 14:31:11 compute-0 python3.9[233590]: ansible-ansible.builtin.service_facts Invoked
Sep 30 14:31:11 compute-0 sudo[233085]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:31:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:31:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:11 compute-0 podman[233603]: 2025-09-30 14:31:11.774778732 +0000 UTC m=+0.065350365 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Sep 30 14:31:11 compute-0 ceph-mon[74194]: pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 680 B/s rd, 97 B/s wr, 0 op/s
Sep 30 14:31:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:11 compute-0 network[233655]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 14:31:11 compute-0 network[233659]: 'network-scripts' will be removed from distribution in near future.
Sep 30 14:31:11 compute-0 network[233661]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 14:31:11 compute-0 sudo[233627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:31:11 compute-0 sudo[233627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:11 compute-0 sudo[233627]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:12.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Sep 30 14:31:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:12.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:13 compute-0 ceph-mon[74194]: pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Sep 30 14:31:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:31:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:14.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Sep 30 14:31:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f300016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Sep 30 14:31:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:31:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:31:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:14.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:14] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:31:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:14] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:31:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:14 compute-0 sudo[233588]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:15 compute-0 ceph-mon[74194]: pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.0 KiB/s wr, 3 op/s
Sep 30 14:31:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:31:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:31:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:16.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Sep 30 14:31:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:16 compute-0 sudo[233942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukdxcveucujlohxuomtxrlonofxgzcch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242676.3498306-1292-43796639820198/AnsiballZ_file.py'
Sep 30 14:31:16 compute-0 sudo[233942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:16.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:16 compute-0 python3.9[233944]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Sep 30 14:31:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:16 compute-0 sudo[233942]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:17.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:31:17 compute-0 sudo[234095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtbnpubspsamunsptrpqitnubprbzwgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242677.0671597-1316-9959700260879/AnsiballZ_modprobe.py'
Sep 30 14:31:17 compute-0 sudo[234095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:17 compute-0 python3.9[234097]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Sep 30 14:31:17 compute-0 sudo[234095]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:17 compute-0 ceph-mon[74194]: pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Sep 30 14:31:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:18 compute-0 sudo[234252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzqxwgimkmowdovzstwvpwlvuskyzpcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242677.9058466-1340-55608132522163/AnsiballZ_stat.py'
Sep 30 14:31:18 compute-0 sudo[234252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:18.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:31:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:18 compute-0 python3.9[234254]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:31:18 compute-0 sudo[234252]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:18.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:18 compute-0 sudo[234375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzmcbiojdmvovtwwqxshjrzluyoknnfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242677.9058466-1340-55608132522163/AnsiballZ_copy.py'
Sep 30 14:31:18 compute-0 sudo[234375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:18.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:31:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:18.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:31:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:18.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:31:18 compute-0 ceph-mgr[74485]: [dashboard INFO request] [192.168.122.100:50790] [POST] [200] [0.002s] [4.0B] [ace18d58-ec2f-41ac-baa1-22b452658d8a] /api/prometheus_receiver
Sep 30 14:31:19 compute-0 python3.9[234377]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242677.9058466-1340-55608132522163/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:19 compute-0 sudo[234375]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143119 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:31:19 compute-0 sudo[234528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvixmdcvsvwzhjamagdomkvhnignstuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242679.3670628-1388-77470309093720/AnsiballZ_lineinfile.py'
Sep 30 14:31:19 compute-0 sudo[234528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:19 compute-0 ceph-mon[74194]: pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:31:19 compute-0 systemd[1]: Stopping User Manager for UID 0...
Sep 30 14:31:19 compute-0 systemd[232845]: Activating special unit Exit the Session...
Sep 30 14:31:19 compute-0 systemd[232845]: Stopped target Main User Target.
Sep 30 14:31:19 compute-0 systemd[232845]: Stopped target Basic System.
Sep 30 14:31:19 compute-0 systemd[232845]: Stopped target Paths.
Sep 30 14:31:19 compute-0 systemd[232845]: Stopped target Sockets.
Sep 30 14:31:19 compute-0 systemd[232845]: Stopped target Timers.
Sep 30 14:31:19 compute-0 systemd[232845]: Stopped Daily Cleanup of User's Temporary Directories.
Sep 30 14:31:19 compute-0 systemd[232845]: Closed D-Bus User Message Bus Socket.
Sep 30 14:31:19 compute-0 systemd[232845]: Stopped Create User's Volatile Files and Directories.
Sep 30 14:31:19 compute-0 systemd[232845]: Removed slice User Application Slice.
Sep 30 14:31:19 compute-0 systemd[232845]: Reached target Shutdown.
Sep 30 14:31:19 compute-0 systemd[232845]: Finished Exit the Session.
Sep 30 14:31:19 compute-0 systemd[232845]: Reached target Exit the Session.
Sep 30 14:31:19 compute-0 python3.9[234530]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:19 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Sep 30 14:31:19 compute-0 systemd[1]: Stopped User Manager for UID 0.
Sep 30 14:31:19 compute-0 sudo[234528]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:19 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Sep 30 14:31:19 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Sep 30 14:31:19 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Sep 30 14:31:19 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Sep 30 14:31:19 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Sep 30 14:31:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:20.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:31:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:20 compute-0 sudo[234682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvmgcvwcpniezblfazvjattsjdntrpse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242680.329991-1412-321623136568/AnsiballZ_systemd.py'
Sep 30 14:31:20 compute-0 sudo[234682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:20.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:20 compute-0 python3.9[234684]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:31:20 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 30 14:31:20 compute-0 systemd[1]: Stopped Load Kernel Modules.
Sep 30 14:31:20 compute-0 systemd[1]: Stopping Load Kernel Modules...
Sep 30 14:31:20 compute-0 systemd[1]: Starting Load Kernel Modules...
Sep 30 14:31:21 compute-0 systemd[1]: Finished Load Kernel Modules.
Sep 30 14:31:21 compute-0 sudo[234682]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:21 compute-0 sudo[234709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:31:21 compute-0 sudo[234709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:21 compute-0 sudo[234709]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:21 compute-0 sudo[234864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crzjkxlwhevuowlkkwkybmkosvwewwhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242681.216248-1436-245550955808977/AnsiballZ_file.py'
Sep 30 14:31:21 compute-0 sudo[234864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:21 compute-0 python3.9[234866]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:31:21 compute-0 sudo[234864]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:21 compute-0 ceph-mon[74194]: pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:31:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:22.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:22 compute-0 sudo[235017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijccigfdvpkegrumvzmprpzgknlvdnyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242682.1030884-1463-114058330784787/AnsiballZ_stat.py'
Sep 30 14:31:22 compute-0 sudo[235017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:31:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:22 compute-0 python3.9[235019]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:31:22 compute-0 sudo[235017]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:22.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:23 compute-0 sudo[235169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcadyylzjgztzmkyjkhgzvcwjquoseem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242682.9363177-1490-27255779764485/AnsiballZ_stat.py'
Sep 30 14:31:23 compute-0 sudo[235169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:23 compute-0 python3.9[235171]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:31:23 compute-0 sudo[235169]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:23 compute-0 ceph-mon[74194]: pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:31:23 compute-0 sudo[235323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsgbkgsrfllfxibnrlwyyycwghqynpqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242683.6484365-1514-249430103684299/AnsiballZ_stat.py'
Sep 30 14:31:23 compute-0 sudo[235323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:24 compute-0 python3.9[235325]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:31:24 compute-0 sudo[235323]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:24.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:31:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:24 compute-0 sudo[235446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbryexlybrlwevxeoburgskdyegwzltb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242683.6484365-1514-249430103684299/AnsiballZ_copy.py'
Sep 30 14:31:24 compute-0 sudo[235446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:24 compute-0 python3.9[235448]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242683.6484365-1514-249430103684299/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:24 compute-0 sudo[235446]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:24.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:24] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:31:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:24] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:31:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:25 compute-0 sudo[235599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doqczcdbphggirkjjuyfqsrbxfpbulyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242684.9250362-1559-233233590677041/AnsiballZ_command.py'
Sep 30 14:31:25 compute-0 sudo[235599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:25 compute-0 python3.9[235601]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:31:25 compute-0 sudo[235599]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:25 compute-0 ceph-mon[74194]: pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:31:26 compute-0 sudo[235753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlzbobhnmqrtgazxkvbdfxbxqszeysmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242685.7882893-1583-106395844295964/AnsiballZ_lineinfile.py'
Sep 30 14:31:26 compute-0 sudo[235753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:26 compute-0 python3.9[235755]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:26 compute-0 sudo[235753]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:26.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:31:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:26.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:26 compute-0 sudo[235905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxgrhmflnodzvdluibkaxjykqpslxsfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242686.4516473-1607-254821172455059/AnsiballZ_replace.py'
Sep 30 14:31:26 compute-0 sudo[235905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:27.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:31:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:27.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:31:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:27.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:31:27 compute-0 python3.9[235907]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:27 compute-0 sudo[235905]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:27 compute-0 sudo[236058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmpqidvfaqecsaocyomavidyviwouodc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242687.309725-1631-185624424641602/AnsiballZ_replace.py'
Sep 30 14:31:27 compute-0 sudo[236058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:27 compute-0 python3.9[236060]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:27 compute-0 sudo[236058]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:27 compute-0 ceph-mon[74194]: pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:31:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:28 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:28.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:31:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:28 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:28 compute-0 sudo[236211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqyuraoxefulxmigftlqqvlrsmcvelny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242688.1369286-1658-138251273719319/AnsiballZ_lineinfile.py'
Sep 30 14:31:28 compute-0 sudo[236211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:28 compute-0 python3.9[236213]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:28 compute-0 sudo[236211]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:28.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:28 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:29 compute-0 sudo[236363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umjarbyvaxyhmcoqibdoefdfdlcgacjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242688.795397-1658-231685123182053/AnsiballZ_lineinfile.py'
Sep 30 14:31:29 compute-0 sudo[236363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:29 compute-0 python3.9[236365]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:29 compute-0 sudo[236363]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:31:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:31:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:31:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:31:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:31:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:31:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:31:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:31:29 compute-0 sudo[236517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiufhwmkonnuwnwwdqmtabrwdjtlzytj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242689.5507903-1658-188822982874105/AnsiballZ_lineinfile.py'
Sep 30 14:31:29 compute-0 sudo[236517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:29 compute-0 ceph-mon[74194]: pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:31:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:31:30 compute-0 python3.9[236519]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:30 compute-0 sudo[236517]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:30 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:30.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:31:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:30 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:30.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:30 compute-0 sudo[236669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnkyhpinvvxsvjaswlukaceunxnouhns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242690.4122798-1658-41674703752413/AnsiballZ_lineinfile.py'
Sep 30 14:31:30 compute-0 sudo[236669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:30 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:30 compute-0 python3.9[236671]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:30 compute-0 sudo[236669]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:31 compute-0 sudo[236822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkhlfgcdpbfwwjfdnggrxyynyazqrdfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242691.1589365-1745-47797243201176/AnsiballZ_stat.py'
Sep 30 14:31:31 compute-0 sudo[236822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:31 compute-0 python3.9[236824]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:31:31 compute-0 sudo[236822]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:31 compute-0 ceph-mon[74194]: pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:31:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:32 compute-0 sudo[236977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tspodcqmpaouwmxbigunclmdzgggvzjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242691.9775932-1769-225668933223914/AnsiballZ_file.py'
Sep 30 14:31:32 compute-0 sudo[236977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:32.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:31:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:32 compute-0 python3.9[236979]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:32 compute-0 sudo[236977]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:32.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:33 compute-0 sudo[237129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtxlbdtiomelnozhdmujnctzxpgydswi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242692.827895-1796-4888151832248/AnsiballZ_file.py'
Sep 30 14:31:33 compute-0 sudo[237129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:33 compute-0 python3.9[237131]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:31:33 compute-0 sudo[237129]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:33 compute-0 sudo[237283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntclghsiduqgenumptxwzrpdcoqptxgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242693.5688636-1820-204458185665317/AnsiballZ_stat.py'
Sep 30 14:31:33 compute-0 sudo[237283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:33 compute-0 ceph-mon[74194]: pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:31:34 compute-0 python3.9[237285]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:31:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:34 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:34 compute-0 sudo[237283]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:34.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:34 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:34 compute-0 sudo[237361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drjamxrzlzfeahbvuhzewoerlmvxyedq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242693.5688636-1820-204458185665317/AnsiballZ_file.py'
Sep 30 14:31:34 compute-0 sudo[237361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:34 compute-0 python3.9[237363]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:31:34 compute-0 sudo[237361]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:34.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:34] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:31:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:34] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:31:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:34 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:35 compute-0 sudo[237513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axwkcnoetofhjkgyyqblotlorhnxyngy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242694.8194277-1820-273990786890759/AnsiballZ_stat.py'
Sep 30 14:31:35 compute-0 sudo[237513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:35 compute-0 python3.9[237515]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:31:35 compute-0 sudo[237513]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:35 compute-0 sudo[237592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cculhpybrhqxzsfegnbhwghvywzjlrxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242694.8194277-1820-273990786890759/AnsiballZ_file.py'
Sep 30 14:31:35 compute-0 sudo[237592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:35 compute-0 python3.9[237594]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:31:35 compute-0 sudo[237592]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:36 compute-0 ceph-mon[74194]: pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:36 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:36 compute-0 podman[237627]: 2025-09-30 14:31:36.192976735 +0000 UTC m=+0.109306005 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Sep 30 14:31:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:31:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:36 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:36 compute-0 sudo[237773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnuhrzluswbgrggfduyptpftbzzlsoic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242696.0902514-1889-238244248529403/AnsiballZ_file.py'
Sep 30 14:31:36 compute-0 sudo[237773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:36 compute-0 python3.9[237775]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:36 compute-0 sudo[237773]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:36.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:36 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:37.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:31:37 compute-0 sudo[237925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhnlretqdojvtijnhfasctgbfpotlakw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242696.7957973-1913-265245490752942/AnsiballZ_stat.py'
Sep 30 14:31:37 compute-0 sudo[237925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:37 compute-0 python3.9[237927]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:31:37 compute-0 sudo[237925]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:37 compute-0 sudo[238004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqccasedjfbgixnglxahetgwziajvuvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242696.7957973-1913-265245490752942/AnsiballZ_file.py'
Sep 30 14:31:37 compute-0 sudo[238004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:37 compute-0 python3.9[238006]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:37 compute-0 sudo[238004]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:38 compute-0 ceph-mon[74194]: pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:31:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:31:38.246 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:31:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:31:38.246 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:31:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:31:38.247 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:31:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:38.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:38 compute-0 sudo[238157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvubhmholcilknerqidsspxdfpajqbun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242698.0288708-1949-175768121433367/AnsiballZ_stat.py'
Sep 30 14:31:38 compute-0 sudo[238157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:38 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Sep 30 14:31:38 compute-0 python3.9[238159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:31:38 compute-0 sudo[238157]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:38 compute-0 sudo[238236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mknsjmmbownkbmpklktbysdscpduoxox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242698.0288708-1949-175768121433367/AnsiballZ_file.py'
Sep 30 14:31:38 compute-0 sudo[238236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:38 compute-0 python3.9[238238]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:39 compute-0 sudo[238236]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:39 compute-0 ceph-mon[74194]: pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:39 compute-0 sudo[238389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asremtwagozdryquavpxrdfkxjyfrwkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242699.1864388-1985-280581309491283/AnsiballZ_systemd.py'
Sep 30 14:31:39 compute-0 sudo[238389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:39 compute-0 podman[238391]: 2025-09-30 14:31:39.564003691 +0000 UTC m=+0.058214403 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:31:39 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Sep 30 14:31:39 compute-0 python3.9[238392]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:31:39 compute-0 systemd[1]: Reloading.
Sep 30 14:31:39 compute-0 systemd-sysv-generator[238443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:31:39 compute-0 systemd-rc-local-generator[238439]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:31:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:40 compute-0 sudo[238389]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:40.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:40.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:40 compute-0 sudo[238601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjynnpucpqprzmngqewqawtqhxshzjpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242700.4624283-2009-78204664633082/AnsiballZ_stat.py'
Sep 30 14:31:40 compute-0 sudo[238601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:40 compute-0 python3.9[238603]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:31:41 compute-0 sudo[238601]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:41 compute-0 sudo[238633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:31:41 compute-0 sudo[238633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:31:41 compute-0 sudo[238633]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:41 compute-0 sudo[238704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktluzrtiaapiyooufxghtkxlhaambwrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242700.4624283-2009-78204664633082/AnsiballZ_file.py'
Sep 30 14:31:41 compute-0 sudo[238704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:41 compute-0 ceph-mon[74194]: pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:41 compute-0 python3.9[238706]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:41 compute-0 sudo[238704]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:42 compute-0 sudo[238869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qozjrussoexeszggzcutsyxqvvzjrhtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242701.6704617-2045-52968163781560/AnsiballZ_stat.py'
Sep 30 14:31:42 compute-0 sudo[238869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:42 compute-0 podman[238832]: 2025-09-30 14:31:42.0412851 +0000 UTC m=+0.084844818 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:31:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:42 compute-0 python3.9[238879]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:31:42 compute-0 sudo[238869]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:42.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:31:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:42 compute-0 sudo[238956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erjksxcpauzbozaxqtjrsrieuijoowrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242701.6704617-2045-52968163781560/AnsiballZ_file.py'
Sep 30 14:31:42 compute-0 sudo[238956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:42 compute-0 python3.9[238958]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:42 compute-0 sudo[238956]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:42.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:43 compute-0 sudo[239108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkxuqhjiiurioierbhucjscogscqzsju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242702.8219125-2081-165016149235703/AnsiballZ_systemd.py'
Sep 30 14:31:43 compute-0 sudo[239108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:43 compute-0 python3.9[239110]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:31:43 compute-0 ceph-mon[74194]: pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:31:43 compute-0 systemd[1]: Reloading.
Sep 30 14:31:43 compute-0 systemd-sysv-generator[239143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:31:43 compute-0 systemd-rc-local-generator[239140]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:31:43 compute-0 systemd[1]: Starting Create netns directory...
Sep 30 14:31:43 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Sep 30 14:31:43 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Sep 30 14:31:43 compute-0 systemd[1]: Finished Create netns directory.
Sep 30 14:31:43 compute-0 sudo[239108]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:44.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:44 compute-0 sudo[239304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vohxvivkjjmqsyzupbxrsvlohsooaita ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242704.2521346-2111-188224350738090/AnsiballZ_file.py'
Sep 30 14:31:44 compute-0 sudo[239304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:31:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:31:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:44] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:31:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:44] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:31:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:44.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:44 compute-0 python3.9[239306]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:31:44 compute-0 sudo[239304]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:45 compute-0 sudo[239456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgqitptavpsgnxdaznekorbsgzmnjbdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242704.9889617-2135-90206798343001/AnsiballZ_stat.py'
Sep 30 14:31:45 compute-0 sudo[239456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:45 compute-0 python3.9[239458]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:31:45 compute-0 ceph-mon[74194]: pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:31:45 compute-0 sudo[239456]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:45 compute-0 sudo[239581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhsflcuicyvgynmrxnyluudjcnlzbrkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242704.9889617-2135-90206798343001/AnsiballZ_copy.py'
Sep 30 14:31:45 compute-0 sudo[239581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:45 compute-0 python3.9[239583]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242704.9889617-2135-90206798343001/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:31:46 compute-0 sudo[239581]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:46.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:31:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:46.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:46 compute-0 sudo[239733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntrhvylybnjrjpdzwgarkoubnfhrsoku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242706.5530632-2186-52391405260570/AnsiballZ_file.py'
Sep 30 14:31:46 compute-0 sudo[239733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:46 compute-0 python3.9[239735]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:31:47 compute-0 sudo[239733]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:47.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:31:47 compute-0 ceph-mon[74194]: pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:31:47 compute-0 sudo[239886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyzfonfuhmhuycxhcvtedddqngkwreni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242707.4768295-2210-89499040617095/AnsiballZ_stat.py'
Sep 30 14:31:47 compute-0 sudo[239886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:47 compute-0 python3.9[239889]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:31:47 compute-0 sudo[239886]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:48 compute-0 sudo[240010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fovzkhdllogfxsspgfztcvzdeekdgfkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242707.4768295-2210-89499040617095/AnsiballZ_copy.py'
Sep 30 14:31:48 compute-0 sudo[240010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:48.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:48 compute-0 python3.9[240012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242707.4768295-2210-89499040617095/.source.json _original_basename=.4sqrn63n follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:48 compute-0 sudo[240010]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:31:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:48.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:31:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48002650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:49 compute-0 sudo[240162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzashklicnhrzckenrfsmhzxvskndfvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242708.725401-2255-11075884646138/AnsiballZ_file.py'
Sep 30 14:31:49 compute-0 sudo[240162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:49 compute-0 python3.9[240164]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:49 compute-0 sudo[240162]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:49 compute-0 ceph-mon[74194]: pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:49 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 14:31:49 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Sep 30 14:31:49 compute-0 sudo[240318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfxawninspazmnmvospkpgacepdziegz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242709.5441332-2279-126717101162657/AnsiballZ_stat.py'
Sep 30 14:31:49 compute-0 sudo[240318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:50 compute-0 sudo[240318]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:50 compute-0 sudo[240441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaobgynkswpthhoalcqxtmeaukoiupyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242709.5441332-2279-126717101162657/AnsiballZ_copy.py'
Sep 30 14:31:50 compute-0 sudo[240441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:50 compute-0 sudo[240441]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:50.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:51 compute-0 sudo[240593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzfqmcocquycnjpulidxekqhiltyznpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242711.033793-2330-46043920872098/AnsiballZ_container_config_data.py'
Sep 30 14:31:51 compute-0 sudo[240593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143151 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:31:51 compute-0 python3.9[240595]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Sep 30 14:31:51 compute-0 ceph-mon[74194]: pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:51 compute-0 sudo[240593]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48002650 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:52 compute-0 sudo[240747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewjwhqxtxwbtuwtriqgqrrldkkxpliwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242711.8270886-2357-122975738968648/AnsiballZ_container_config_hash.py'
Sep 30 14:31:52 compute-0 sudo[240747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:52.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:52 compute-0 python3.9[240749]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 14:31:52 compute-0 sudo[240747]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:52.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003cf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:53 compute-0 sudo[240899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcuzqfctkajmwnfiwvvwkvnkqijhwlxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242712.9254522-2384-89803530626135/AnsiballZ_podman_container_info.py'
Sep 30 14:31:53 compute-0 sudo[240899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:53 compute-0 python3.9[240901]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Sep 30 14:31:53 compute-0 ceph-mon[74194]: pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:31:53 compute-0 sudo[240899]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:54.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:31:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48003360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:54] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:31:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:31:54] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:31:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:54.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:55 compute-0 sudo[241080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyqpgxdsaugyyvnzavhbcndnzyikmxrv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759242714.9177606-2423-213259918602751/AnsiballZ_edpm_container_manage.py'
Sep 30 14:31:55 compute-0 sudo[241080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:55 compute-0 python3[241082]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 14:31:55 compute-0 ceph-mon[74194]: pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:31:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:31:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003d10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:31:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:56.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:31:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:31:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:56 compute-0 podman[241097]: 2025-09-30 14:31:56.658776374 +0000 UTC m=+1.175203032 image pull 80aeb93432d60c5f52c5325081f51dbf5658fe1615083ed284852e8f6df43250 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Sep 30 14:31:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:56.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:56 compute-0 podman[241154]: 2025-09-30 14:31:56.779216491 +0000 UTC m=+0.019343050 image pull 80aeb93432d60c5f52c5325081f51dbf5658fe1615083ed284852e8f6df43250 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Sep 30 14:31:56 compute-0 podman[241154]: 2025-09-30 14:31:56.878592753 +0000 UTC m=+0.118719292 container create b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:31:56 compute-0 python3[241082]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Sep 30 14:31:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48003360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:31:57.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:31:57 compute-0 sudo[241080]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:57 compute-0 ceph-mon[74194]: pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:31:57 compute-0 sudo[241344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goxnffhfvfqgqiplnzudfegtjnzoxtip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242717.382765-2447-112476678570307/AnsiballZ_stat.py'
Sep 30 14:31:57 compute-0 sudo[241344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:57 compute-0 python3.9[241346]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:31:57 compute-0 sudo[241344]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:31:58.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:31:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:58 compute-0 sudo[241499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glxrszemrryikuvnugarrfviqvnayazu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242718.2096913-2474-151939669645999/AnsiballZ_file.py'
Sep 30 14:31:58 compute-0 sudo[241499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:58 compute-0 python3.9[241501]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:58 compute-0 sudo[241499]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:31:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:31:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:31:58.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:31:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:31:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:31:58 compute-0 sudo[241575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lexznymealfnxvazhblklwpuuhnzfefm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242718.2096913-2474-151939669645999/AnsiballZ_stat.py'
Sep 30 14:31:58 compute-0 sudo[241575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:59 compute-0 python3.9[241577]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:31:59 compute-0 sudo[241575]: pam_unix(sudo:session): session closed for user root
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:31:59
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.log', 'images', 'vms']
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:31:59 compute-0 ceph-mon[74194]: pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:31:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:31:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:31:59 compute-0 sudo[241727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqyufysnakpyurqejmsrdtfxajkbfixp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242719.2901785-2474-127448253260168/AnsiballZ_copy.py'
Sep 30 14:31:59 compute-0 sudo[241727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:31:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:31:59 compute-0 python3.9[241729]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759242719.2901785-2474-127448253260168/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:31:59 compute-0 sudo[241727]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:00 compute-0 sudo[241804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xawkttsfnfkotvososummkpghnihsoae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242719.2901785-2474-127448253260168/AnsiballZ_systemd.py'
Sep 30 14:32:00 compute-0 sudo[241804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:32:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:00.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:32:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:00 compute-0 python3.9[241806]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 14:32:00 compute-0 systemd[1]: Reloading.
Sep 30 14:32:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:32:00 compute-0 systemd-rc-local-generator[241834]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:32:00 compute-0 systemd-sysv-generator[241837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:32:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:00.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:32:00 compute-0 sudo[241804]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:32:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:32:01 compute-0 sudo[241915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saxskmxzqidwrzsiblqygcncefbeumpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242719.2901785-2474-127448253260168/AnsiballZ_systemd.py'
Sep 30 14:32:01 compute-0 sudo[241915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:01 compute-0 sudo[241918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:32:01 compute-0 sudo[241918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:01 compute-0 sudo[241918]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:01 compute-0 python3.9[241917]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:32:01 compute-0 systemd[1]: Reloading.
Sep 30 14:32:01 compute-0 systemd-sysv-generator[241976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:32:01 compute-0 systemd-rc-local-generator[241972]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:32:01 compute-0 ceph-mon[74194]: pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:32:01 compute-0 systemd[1]: Starting multipathd container...
Sep 30 14:32:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d089b21612bbf47370fecf3f166153ed966289606833c8f9570f7df98abd27d4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d089b21612bbf47370fecf3f166153ed966289606833c8f9570f7df98abd27d4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:01 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07.
Sep 30 14:32:01 compute-0 podman[241984]: 2025-09-30 14:32:01.970957833 +0000 UTC m=+0.115127405 container init b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:32:01 compute-0 multipathd[241999]: + sudo -E kolla_set_configs
Sep 30 14:32:01 compute-0 podman[241984]: 2025-09-30 14:32:01.989556643 +0000 UTC m=+0.133726195 container start b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:32:01 compute-0 podman[241984]: multipathd
Sep 30 14:32:02 compute-0 systemd[1]: Started multipathd container.
Sep 30 14:32:02 compute-0 sudo[242006]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Sep 30 14:32:02 compute-0 sudo[242006]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Sep 30 14:32:02 compute-0 sudo[242006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 14:32:02 compute-0 sudo[241915]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:02 compute-0 multipathd[241999]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 14:32:02 compute-0 multipathd[241999]: INFO:__main__:Validating config file
Sep 30 14:32:02 compute-0 multipathd[241999]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 14:32:02 compute-0 multipathd[241999]: INFO:__main__:Writing out command to execute
Sep 30 14:32:02 compute-0 sudo[242006]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:02 compute-0 multipathd[241999]: ++ cat /run_command
Sep 30 14:32:02 compute-0 multipathd[241999]: + CMD='/usr/sbin/multipathd -d'
Sep 30 14:32:02 compute-0 multipathd[241999]: + ARGS=
Sep 30 14:32:02 compute-0 multipathd[241999]: + sudo kolla_copy_cacerts
Sep 30 14:32:02 compute-0 sudo[242046]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Sep 30 14:32:02 compute-0 podman[242005]: 2025-09-30 14:32:02.092361537 +0000 UTC m=+0.091915592 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:32:02 compute-0 sudo[242046]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Sep 30 14:32:02 compute-0 sudo[242046]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 14:32:02 compute-0 sudo[242046]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:02 compute-0 multipathd[241999]: + [[ ! -n '' ]]
Sep 30 14:32:02 compute-0 multipathd[241999]: + . kolla_extend_start
Sep 30 14:32:02 compute-0 multipathd[241999]: Running command: '/usr/sbin/multipathd -d'
Sep 30 14:32:02 compute-0 multipathd[241999]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Sep 30 14:32:02 compute-0 multipathd[241999]: + umask 0022
Sep 30 14:32:02 compute-0 multipathd[241999]: + exec /usr/sbin/multipathd -d
Sep 30 14:32:02 compute-0 systemd[1]: b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07-68bdbfb18b01491a.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 14:32:02 compute-0 systemd[1]: b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07-68bdbfb18b01491a.service: Failed with result 'exit-code'.
Sep 30 14:32:02 compute-0 multipathd[241999]: 6084.524823 | --------start up--------
Sep 30 14:32:02 compute-0 multipathd[241999]: 6084.524841 | read /etc/multipath.conf
Sep 30 14:32:02 compute-0 multipathd[241999]: 6084.529679 | path checkers start up
Sep 30 14:32:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:02.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:32:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:02 compute-0 python3.9[242189]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:32:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:02.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:03 compute-0 sudo[242341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebqvgpxhleboyrcrffakvkxgzkrmwzxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242722.9642844-2582-52004562109024/AnsiballZ_command.py'
Sep 30 14:32:03 compute-0 sudo[242341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:03 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:32:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:03 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:32:03 compute-0 python3.9[242343]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:32:03 compute-0 sudo[242341]: pam_unix(sudo:session): session closed for user root
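The ansible-ansible.legacy.command task above asks podman which running containers mount /etc/multipath.conf, so only those containers are candidates for a restart. A small standalone sketch of the same query, assuming only that the podman CLI is available and using the path exactly as logged:

    import subprocess

    # Mirrors: podman ps --filter volume=/etc/multipath.conf --format {{.Names}}
    result = subprocess.run(
        ["podman", "ps", "--filter", "volume=/etc/multipath.conf",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    )
    names = [line for line in result.stdout.splitlines() if line]
    print(names)  # on this host the later log lines print just: multipathd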
Sep 30 14:32:03 compute-0 ceph-mon[74194]: pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:32:04 compute-0 sudo[242508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unttltklsohkezjccbguvdczkksizisg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242723.7809372-2606-210953565847247/AnsiballZ_systemd.py'
Sep 30 14:32:04 compute-0 sudo[242508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:04.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:04 compute-0 python3.9[242510]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:32:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:32:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:04 compute-0 systemd[1]: Stopping multipathd container...
Sep 30 14:32:04 compute-0 multipathd[241999]: 6086.902388 | exit (signal)
Sep 30 14:32:04 compute-0 multipathd[241999]: 6086.902644 | --------shut down-------
Sep 30 14:32:04 compute-0 systemd[1]: libpod-b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07.scope: Deactivated successfully.
Sep 30 14:32:04 compute-0 podman[242514]: 2025-09-30 14:32:04.519943754 +0000 UTC m=+0.089123696 container died b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true)
Sep 30 14:32:04 compute-0 systemd[1]: b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07-68bdbfb18b01491a.timer: Deactivated successfully.
Sep 30 14:32:04 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07.
Sep 30 14:32:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07-userdata-shm.mount: Deactivated successfully.
Sep 30 14:32:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d089b21612bbf47370fecf3f166153ed966289606833c8f9570f7df98abd27d4-merged.mount: Deactivated successfully.
Sep 30 14:32:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:04] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:32:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:04] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:32:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:04.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:04 compute-0 podman[242514]: 2025-09-30 14:32:04.891216325 +0000 UTC m=+0.460396267 container cleanup b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, tcib_managed=true)
Sep 30 14:32:04 compute-0 podman[242514]: multipathd
Sep 30 14:32:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:04 compute-0 podman[242541]: multipathd
Sep 30 14:32:04 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Sep 30 14:32:04 compute-0 systemd[1]: Stopped multipathd container.
Sep 30 14:32:04 compute-0 systemd[1]: Starting multipathd container...
Sep 30 14:32:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d089b21612bbf47370fecf3f166153ed966289606833c8f9570f7df98abd27d4/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d089b21612bbf47370fecf3f166153ed966289606833c8f9570f7df98abd27d4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07.
Sep 30 14:32:05 compute-0 podman[242554]: 2025-09-30 14:32:05.09080613 +0000 UTC m=+0.097195664 container init b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 14:32:05 compute-0 multipathd[242569]: + sudo -E kolla_set_configs
Sep 30 14:32:05 compute-0 sudo[242575]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Sep 30 14:32:05 compute-0 sudo[242575]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Sep 30 14:32:05 compute-0 sudo[242575]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 14:32:05 compute-0 podman[242554]: 2025-09-30 14:32:05.122635126 +0000 UTC m=+0.129024660 container start b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Sep 30 14:32:05 compute-0 podman[242554]: multipathd
Sep 30 14:32:05 compute-0 systemd[1]: Started multipathd container.
Sep 30 14:32:05 compute-0 multipathd[242569]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 14:32:05 compute-0 multipathd[242569]: INFO:__main__:Validating config file
Sep 30 14:32:05 compute-0 multipathd[242569]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 14:32:05 compute-0 multipathd[242569]: INFO:__main__:Writing out command to execute
Sep 30 14:32:05 compute-0 sudo[242575]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:05 compute-0 sudo[242508]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:05 compute-0 multipathd[242569]: ++ cat /run_command
Sep 30 14:32:05 compute-0 multipathd[242569]: + CMD='/usr/sbin/multipathd -d'
Sep 30 14:32:05 compute-0 multipathd[242569]: + ARGS=
Sep 30 14:32:05 compute-0 multipathd[242569]: + sudo kolla_copy_cacerts
Sep 30 14:32:05 compute-0 sudo[242596]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Sep 30 14:32:05 compute-0 sudo[242596]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Sep 30 14:32:05 compute-0 sudo[242596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 30 14:32:05 compute-0 podman[242576]: 2025-09-30 14:32:05.190464179 +0000 UTC m=+0.058561445 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Sep 30 14:32:05 compute-0 sudo[242596]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:05 compute-0 multipathd[242569]: + [[ ! -n '' ]]
Sep 30 14:32:05 compute-0 multipathd[242569]: + . kolla_extend_start
Sep 30 14:32:05 compute-0 multipathd[242569]: Running command: '/usr/sbin/multipathd -d'
Sep 30 14:32:05 compute-0 multipathd[242569]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Sep 30 14:32:05 compute-0 multipathd[242569]: + umask 0022
Sep 30 14:32:05 compute-0 multipathd[242569]: + exec /usr/sbin/multipathd -d
Sep 30 14:32:05 compute-0 systemd[1]: b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07-622e723d7424b5eb.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 14:32:05 compute-0 systemd[1]: b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07-622e723d7424b5eb.service: Failed with result 'exit-code'.
Sep 30 14:32:05 compute-0 multipathd[242569]: 6087.622932 | --------start up--------
Sep 30 14:32:05 compute-0 multipathd[242569]: 6087.622947 | read /etc/multipath.conf
Sep 30 14:32:05 compute-0 multipathd[242569]: 6087.627664 | path checkers start up
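The kolla_set_configs / kolla_extend_start trace above is the usual kolla entrypoint flow: load and validate /var/lib/kolla/config_files/config.json, copy files under the COPY_ALWAYS strategy, write the command to /run_command, then exec it (here /usr/sbin/multipathd -d). A rough illustrative sketch of that flow, not the actual kolla scripts, assuming the conventional config.json keys "command" and "config_files" with "source"/"dest" entries:

    import json
    import os
    import shutil

    CONFIG = "/var/lib/kolla/config_files/config.json"

    with open(CONFIG) as f:          # "Loading config file at ..."
        cfg = json.load(f)

    # COPY_ALWAYS: unconditionally copy each declared config file into place.
    for entry in cfg.get("config_files", []):
        shutil.copy(entry["source"], entry["dest"])

    # "Writing out command to execute", then replace the process with it.
    with open("/run_command", "w") as f:
        f.write(cfg["command"])
    os.execvp("/bin/sh", ["/bin/sh", "-c", cfg["command"]])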
Sep 30 14:32:05 compute-0 sudo[242759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuolisaxzeuxynwegudnzkepcfmmkqzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242725.333242-2630-275731444387051/AnsiballZ_file.py'
Sep 30 14:32:05 compute-0 sudo[242759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:05 compute-0 python3.9[242761]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:05 compute-0 sudo[242759]: pam_unix(sudo:session): session closed for user root
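Taken together, these tasks implement a simple restart-flag handshake: the play stats /etc/multipath/.multipath_restart_required (presumably written by an earlier configuration step when multipath.conf changed), restarts the edpm_multipathd unit, and then removes the flag so the restart happens only once. A condensed sketch of the same logic outside Ansible, using the unit name and flag path shown in the log:

    import os
    import subprocess

    FLAG = "/etc/multipath/.multipath_restart_required"

    # Restart the container unit only if a config change left the flag behind.
    if os.path.exists(FLAG):
        subprocess.run(["systemctl", "restart", "edpm_multipathd"], check=True)
        os.remove(FLAG)  # clear the flag so the restart is one-shot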
Sep 30 14:32:05 compute-0 ceph-mon[74194]: pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:32:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:32:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:06.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:32:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:06 compute-0 sudo[242927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhofruqmnwcpkaulnzxbbrxnqfewbeyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242726.3590548-2666-214517064797257/AnsiballZ_file.py'
Sep 30 14:32:06 compute-0 sudo[242927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:06 compute-0 podman[242886]: 2025-09-30 14:32:06.642517452 +0000 UTC m=+0.076079816 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:32:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:06.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:06 compute-0 python3.9[242937]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Sep 30 14:32:06 compute-0 sudo[242927]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:32:07.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:32:07 compute-0 sudo[243091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbiuurefgqusncetowmaqbvoexxjtpsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242727.0453966-2690-256966014688713/AnsiballZ_modprobe.py'
Sep 30 14:32:07 compute-0 sudo[243091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:07 compute-0 python3.9[243093]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Sep 30 14:32:07 compute-0 kernel: Key type psk registered
Sep 30 14:32:07 compute-0 sudo[243091]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:07 compute-0 ceph-mon[74194]: pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:32:08 compute-0 sudo[243255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qetcomfwehksauhwfdokfsepyyexvltp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242727.8026495-2714-101425921072398/AnsiballZ_stat.py'
Sep 30 14:32:08 compute-0 sudo[243255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:08 compute-0 python3.9[243257]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:32:08 compute-0 sudo[243255]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:08.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:32:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:08 compute-0 sudo[243378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvzkxxrhowwbzpfwbcfjpycmkyznrnsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242727.8026495-2714-101425921072398/AnsiballZ_copy.py'
Sep 30 14:32:08 compute-0 sudo[243378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:08 compute-0 python3.9[243380]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759242727.8026495-2714-101425921072398/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:08.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:08 compute-0 sudo[243378]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:09 compute-0 sudo[243531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbhnuswtxhsrzjwabtjewlptbpovcjxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242729.218475-2762-74147682426543/AnsiballZ_lineinfile.py'
Sep 30 14:32:09 compute-0 sudo[243531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:09 compute-0 python3.9[243533]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:09 compute-0 sudo[243531]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:32:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 4029 writes, 18K keys, 4029 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 4029 writes, 4029 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1479 writes, 6001 keys, 1479 commit groups, 1.0 writes per commit group, ingest: 11.26 MB, 0.02 MB/s
                                           Interval WAL: 1479 writes, 1479 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     66.7      0.41              0.06         8    0.051       0      0       0.0       0.0
                                             L6      1/0   11.30 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    108.5     92.0      0.95              0.19         7    0.136     32K   3655       0.0       0.0
                                            Sum      1/0   11.30 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     76.1     84.4      1.36              0.25        15    0.090     32K   3655       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.7    141.8    135.9      0.32              0.09         6    0.053     16K   1848       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    108.5     92.0      0.95              0.19         7    0.136     32K   3655       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     67.2      0.40              0.06         7    0.057       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.026, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.11 GB write, 0.10 MB/s write, 0.10 GB read, 0.09 MB/s read, 1.4 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d7211350#2 capacity: 304.00 MB usage: 5.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000171 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(340,5.21 MB,1.71225%) FilterBlock(16,101.30 KB,0.0325404%) IndexBlock(16,195.58 KB,0.0628271%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 14:32:09 compute-0 ceph-mon[74194]: pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:32:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:10 compute-0 podman[243634]: 2025-09-30 14:32:10.172010481 +0000 UTC m=+0.092685163 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:32:10 compute-0 sudo[243704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znmwsodkozsqlwcgiakhzztdknwxynal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242729.916302-2786-175524541904900/AnsiballZ_systemd.py'
Sep 30 14:32:10 compute-0 sudo[243704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:10.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:32:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:10 compute-0 python3.9[243706]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:32:10 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 30 14:32:10 compute-0 systemd[1]: Stopped Load Kernel Modules.
Sep 30 14:32:10 compute-0 systemd[1]: Stopping Load Kernel Modules...
Sep 30 14:32:10 compute-0 systemd[1]: Starting Load Kernel Modules...
Sep 30 14:32:10 compute-0 systemd[1]: Finished Load Kernel Modules.
Sep 30 14:32:10 compute-0 sudo[243704]: pam_unix(sudo:session): session closed for user root
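The sequence above makes the nvme-fabrics kernel module both loaded now and loaded on every boot: modprobe it, drop nvme-fabrics.conf into /etc/modules-load.d, append the module name to /etc/modules, then restart systemd-modules-load.service so the new files take effect without a reboot. A minimal equivalent sketch (paths and module name as in the log; requires root):

    import pathlib
    import subprocess

    MODULE = "nvme-fabrics"

    # Load the module immediately (ansible: community.general.modprobe).
    subprocess.run(["modprobe", MODULE], check=True)

    # Persist it for future boots via systemd-modules-load ...
    conf = pathlib.Path("/etc/modules-load.d") / f"{MODULE}.conf"
    conf.parent.mkdir(mode=0o755, exist_ok=True)
    conf.write_text(MODULE + "\n")

    # ... and in the legacy /etc/modules list (ansible: lineinfile).
    modules = pathlib.Path("/etc/modules")
    lines = modules.read_text().splitlines() if modules.exists() else []
    if MODULE not in lines:
        modules.write_text("\n".join(lines + [MODULE]) + "\n")

    # Re-run the loader so the new configuration is applied right away.
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"],
                   check=True)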
Sep 30 14:32:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:10.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:11 compute-0 sudo[243860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrkpmmcieyljfwplalcjuxzqoneiyevg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242730.9223006-2810-119488228687548/AnsiballZ_setup.py'
Sep 30 14:32:11 compute-0 sudo[243860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:11 compute-0 python3.9[243862]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Sep 30 14:32:11 compute-0 sudo[243860]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:11 compute-0 ceph-mon[74194]: pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:32:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:12 compute-0 sudo[243960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyqfrkevewykdadwqxvqaoqajdoblyju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242730.9223006-2810-119488228687548/AnsiballZ_dnf.py'
Sep 30 14:32:12 compute-0 sudo[243960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:12 compute-0 podman[243920]: 2025-09-30 14:32:12.194062067 +0000 UTC m=+0.052660427 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Sep 30 14:32:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:12.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:32:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:12 compute-0 python3.9[243968]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Sep 30 14:32:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:12.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:12 compute-0 sudo[243970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:32:12 compute-0 sudo[243970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:12 compute-0 sudo[243970]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:12 compute-0 sudo[243995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:32:12 compute-0 sudo[243995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143213 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:32:13 compute-0 sudo[243995]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:32:13 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:32:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:32:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:32:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:32:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:32:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:32:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:32:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:32:13 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:32:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:32:13 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:32:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:32:13 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:32:13 compute-0 sudo[244051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:32:13 compute-0 sudo[244051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:13 compute-0 sudo[244051]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:13 compute-0 sudo[244076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:32:13 compute-0 sudo[244076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
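Here cephadm, driven over the ceph-admin account, wraps a ceph-volume call inside the Ceph container to turn the pre-created logical volume /dev/ceph_vg0/ceph_lv0 into an OSD; --no-systemd is passed because cephadm creates and starts the daemon's unit itself. A hedged sketch of the wrapped call with the container plumbing stripped away (device path and flags as logged; not a substitute for running cephadm):

    import subprocess

    # Prepare the LV as an OSD non-interactively, without generating
    # systemd units (cephadm manages the daemon container separately).
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "--yes", "--no-systemd"],
        check=True,
    )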
Sep 30 14:32:14 compute-0 ceph-mon[74194]: pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:32:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:32:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:32:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:32:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:32:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:32:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:32:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:32:14 compute-0 podman[244144]: 2025-09-30 14:32:14.123117322 +0000 UTC m=+0.041651010 container create 78c6c2acdfc0df05470880f6e0d787beba4aa7b21cbd860c03fb395157261889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:32:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:14 compute-0 systemd[1]: Started libpod-conmon-78c6c2acdfc0df05470880f6e0d787beba4aa7b21cbd860c03fb395157261889.scope.
Sep 30 14:32:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:32:14 compute-0 podman[244144]: 2025-09-30 14:32:14.105387666 +0000 UTC m=+0.023921374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:32:14 compute-0 podman[244144]: 2025-09-30 14:32:14.2097035 +0000 UTC m=+0.128237208 container init 78c6c2acdfc0df05470880f6e0d787beba4aa7b21cbd860c03fb395157261889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:32:14 compute-0 podman[244144]: 2025-09-30 14:32:14.222408861 +0000 UTC m=+0.140942559 container start 78c6c2acdfc0df05470880f6e0d787beba4aa7b21cbd860c03fb395157261889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sutherland, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 14:32:14 compute-0 podman[244144]: 2025-09-30 14:32:14.225677969 +0000 UTC m=+0.144211687 container attach 78c6c2acdfc0df05470880f6e0d787beba4aa7b21cbd860c03fb395157261889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:32:14 compute-0 elegant_sutherland[244162]: 167 167
Sep 30 14:32:14 compute-0 systemd[1]: libpod-78c6c2acdfc0df05470880f6e0d787beba4aa7b21cbd860c03fb395157261889.scope: Deactivated successfully.
Sep 30 14:32:14 compute-0 podman[244144]: 2025-09-30 14:32:14.235803362 +0000 UTC m=+0.154337050 container died 78c6c2acdfc0df05470880f6e0d787beba4aa7b21cbd860c03fb395157261889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sutherland, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:32:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bffe2f4980cc0e7726df25b0b686fc08b717c7d33a8f950a1d0eda2b16b3321e-merged.mount: Deactivated successfully.
Sep 30 14:32:14 compute-0 podman[244144]: 2025-09-30 14:32:14.297698315 +0000 UTC m=+0.216232003 container remove 78c6c2acdfc0df05470880f6e0d787beba4aa7b21cbd860c03fb395157261889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_sutherland, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:32:14 compute-0 systemd[1]: libpod-conmon-78c6c2acdfc0df05470880f6e0d787beba4aa7b21cbd860c03fb395157261889.scope: Deactivated successfully.
Sep 30 14:32:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:14.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:32:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:14 compute-0 podman[244187]: 2025-09-30 14:32:14.481574618 +0000 UTC m=+0.049818470 container create 89e32091da87ed14ca875dd855e22d8504b214ab25ec8cff7e47d8ab2da61dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_solomon, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:32:14 compute-0 systemd[1]: Started libpod-conmon-89e32091da87ed14ca875dd855e22d8504b214ab25ec8cff7e47d8ab2da61dad.scope.
Sep 30 14:32:14 compute-0 podman[244187]: 2025-09-30 14:32:14.459800443 +0000 UTC m=+0.028044325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:32:14 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23eb3e9990a55cc397b225187682e98f8d7c578a9cecedc26ff6a27f127580ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23eb3e9990a55cc397b225187682e98f8d7c578a9cecedc26ff6a27f127580ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23eb3e9990a55cc397b225187682e98f8d7c578a9cecedc26ff6a27f127580ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23eb3e9990a55cc397b225187682e98f8d7c578a9cecedc26ff6a27f127580ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23eb3e9990a55cc397b225187682e98f8d7c578a9cecedc26ff6a27f127580ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:14 compute-0 podman[244187]: 2025-09-30 14:32:14.584653489 +0000 UTC m=+0.152897361 container init 89e32091da87ed14ca875dd855e22d8504b214ab25ec8cff7e47d8ab2da61dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_solomon, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:32:14 compute-0 podman[244187]: 2025-09-30 14:32:14.593469806 +0000 UTC m=+0.161713658 container start 89e32091da87ed14ca875dd855e22d8504b214ab25ec8cff7e47d8ab2da61dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_solomon, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 14:32:14 compute-0 podman[244187]: 2025-09-30 14:32:14.59768376 +0000 UTC m=+0.165927622 container attach 89e32091da87ed14ca875dd855e22d8504b214ab25ec8cff7e47d8ab2da61dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:32:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:32:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:32:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:14] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:32:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:14] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:32:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:14.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:14 compute-0 great_solomon[244204]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:32:14 compute-0 great_solomon[244204]: --> All data devices are unavailable
Sep 30 14:32:15 compute-0 systemd[1]: libpod-89e32091da87ed14ca875dd855e22d8504b214ab25ec8cff7e47d8ab2da61dad.scope: Deactivated successfully.
Sep 30 14:32:15 compute-0 podman[244187]: 2025-09-30 14:32:15.021724009 +0000 UTC m=+0.589967891 container died 89e32091da87ed14ca875dd855e22d8504b214ab25ec8cff7e47d8ab2da61dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-23eb3e9990a55cc397b225187682e98f8d7c578a9cecedc26ff6a27f127580ab-merged.mount: Deactivated successfully.
Sep 30 14:32:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:32:15 compute-0 podman[244187]: 2025-09-30 14:32:15.07162198 +0000 UTC m=+0.639865842 container remove 89e32091da87ed14ca875dd855e22d8504b214ab25ec8cff7e47d8ab2da61dad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:32:15 compute-0 systemd[1]: libpod-conmon-89e32091da87ed14ca875dd855e22d8504b214ab25ec8cff7e47d8ab2da61dad.scope: Deactivated successfully.
Sep 30 14:32:15 compute-0 sudo[244076]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:15 compute-0 sudo[244231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:32:15 compute-0 sudo[244231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:15 compute-0 sudo[244231]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:15 compute-0 sudo[244256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:32:15 compute-0 sudo[244256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:15 compute-0 podman[244323]: 2025-09-30 14:32:15.733320828 +0000 UTC m=+0.050878769 container create 135bff417db80bcb88a1cf57cf2f5894bef2aeb9af4766489071a5fec4f0a2ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:32:15 compute-0 systemd[1]: Started libpod-conmon-135bff417db80bcb88a1cf57cf2f5894bef2aeb9af4766489071a5fec4f0a2ce.scope.
Sep 30 14:32:15 compute-0 podman[244323]: 2025-09-30 14:32:15.710844573 +0000 UTC m=+0.028402524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:32:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:32:15 compute-0 podman[244323]: 2025-09-30 14:32:15.830318205 +0000 UTC m=+0.147876156 container init 135bff417db80bcb88a1cf57cf2f5894bef2aeb9af4766489071a5fec4f0a2ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:32:15 compute-0 podman[244323]: 2025-09-30 14:32:15.838403522 +0000 UTC m=+0.155961463 container start 135bff417db80bcb88a1cf57cf2f5894bef2aeb9af4766489071a5fec4f0a2ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:32:15 compute-0 podman[244323]: 2025-09-30 14:32:15.841961638 +0000 UTC m=+0.159519589 container attach 135bff417db80bcb88a1cf57cf2f5894bef2aeb9af4766489071a5fec4f0a2ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:32:15 compute-0 quirky_booth[244340]: 167 167
Sep 30 14:32:15 compute-0 systemd[1]: libpod-135bff417db80bcb88a1cf57cf2f5894bef2aeb9af4766489071a5fec4f0a2ce.scope: Deactivated successfully.
Sep 30 14:32:15 compute-0 podman[244323]: 2025-09-30 14:32:15.844771404 +0000 UTC m=+0.162329335 container died 135bff417db80bcb88a1cf57cf2f5894bef2aeb9af4766489071a5fec4f0a2ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8f7656c061fa89ec8c4bcb0f4838643ec9cbe06d21a772f441082b2ac9ace92-merged.mount: Deactivated successfully.
Sep 30 14:32:15 compute-0 podman[244323]: 2025-09-30 14:32:15.887155483 +0000 UTC m=+0.204713414 container remove 135bff417db80bcb88a1cf57cf2f5894bef2aeb9af4766489071a5fec4f0a2ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_booth, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:32:15 compute-0 systemd[1]: libpod-conmon-135bff417db80bcb88a1cf57cf2f5894bef2aeb9af4766489071a5fec4f0a2ce.scope: Deactivated successfully.
Sep 30 14:32:16 compute-0 podman[244365]: 2025-09-30 14:32:16.090591782 +0000 UTC m=+0.061866744 container create cd1b92a87cde507435d5c0100aeb18a66aa8adcf1018ce65b18bf507b254f4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:32:16 compute-0 ceph-mon[74194]: pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:32:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:16 compute-0 systemd[1]: Started libpod-conmon-cd1b92a87cde507435d5c0100aeb18a66aa8adcf1018ce65b18bf507b254f4b4.scope.
Sep 30 14:32:16 compute-0 podman[244365]: 2025-09-30 14:32:16.060683628 +0000 UTC m=+0.031958620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:32:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc7bbf408d059b86df29c93eefeaa6b14c2c03322f359ad1256926f8925565d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc7bbf408d059b86df29c93eefeaa6b14c2c03322f359ad1256926f8925565d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc7bbf408d059b86df29c93eefeaa6b14c2c03322f359ad1256926f8925565d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc7bbf408d059b86df29c93eefeaa6b14c2c03322f359ad1256926f8925565d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:16 compute-0 podman[244365]: 2025-09-30 14:32:16.194288979 +0000 UTC m=+0.165563961 container init cd1b92a87cde507435d5c0100aeb18a66aa8adcf1018ce65b18bf507b254f4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:32:16 compute-0 podman[244365]: 2025-09-30 14:32:16.201450412 +0000 UTC m=+0.172725374 container start cd1b92a87cde507435d5c0100aeb18a66aa8adcf1018ce65b18bf507b254f4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:32:16 compute-0 podman[244365]: 2025-09-30 14:32:16.204889784 +0000 UTC m=+0.176164776 container attach cd1b92a87cde507435d5c0100aeb18a66aa8adcf1018ce65b18bf507b254f4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:32:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:16.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:32:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:16 compute-0 musing_tharp[244382]: {
Sep 30 14:32:16 compute-0 musing_tharp[244382]:     "0": [
Sep 30 14:32:16 compute-0 musing_tharp[244382]:         {
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "devices": [
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "/dev/loop3"
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             ],
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "lv_name": "ceph_lv0",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "lv_size": "21470642176",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "name": "ceph_lv0",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "tags": {
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.cluster_name": "ceph",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.crush_device_class": "",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.encrypted": "0",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.osd_id": "0",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.type": "block",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.vdo": "0",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:                 "ceph.with_tpm": "0"
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             },
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "type": "block",
Sep 30 14:32:16 compute-0 musing_tharp[244382]:             "vg_name": "ceph_vg0"
Sep 30 14:32:16 compute-0 musing_tharp[244382]:         }
Sep 30 14:32:16 compute-0 musing_tharp[244382]:     ]
Sep 30 14:32:16 compute-0 musing_tharp[244382]: }
Sep 30 14:32:16 compute-0 systemd[1]: libpod-cd1b92a87cde507435d5c0100aeb18a66aa8adcf1018ce65b18bf507b254f4b4.scope: Deactivated successfully.
Sep 30 14:32:16 compute-0 podman[244365]: 2025-09-30 14:32:16.537086413 +0000 UTC m=+0.508361395 container died cd1b92a87cde507435d5c0100aeb18a66aa8adcf1018ce65b18bf507b254f4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:32:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc7bbf408d059b86df29c93eefeaa6b14c2c03322f359ad1256926f8925565d-merged.mount: Deactivated successfully.
Sep 30 14:32:16 compute-0 podman[244365]: 2025-09-30 14:32:16.590755656 +0000 UTC m=+0.562030618 container remove cd1b92a87cde507435d5c0100aeb18a66aa8adcf1018ce65b18bf507b254f4b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:32:16 compute-0 systemd[1]: libpod-conmon-cd1b92a87cde507435d5c0100aeb18a66aa8adcf1018ce65b18bf507b254f4b4.scope: Deactivated successfully.
Sep 30 14:32:16 compute-0 sudo[244256]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:16 compute-0 sudo[244404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:32:16 compute-0 sudo[244404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:16 compute-0 sudo[244404]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:16 compute-0 sudo[244429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:32:16 compute-0 sudo[244429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:16.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:32:17.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:32:17 compute-0 ceph-mon[74194]: pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:32:17 compute-0 podman[244493]: 2025-09-30 14:32:17.160581704 +0000 UTC m=+0.035086164 container create b2612bd4ade37e0a880cb5ed7ad0e07be0f80ee7857d89c3f8eaf5eaca51e021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:32:17 compute-0 systemd[1]: Started libpod-conmon-b2612bd4ade37e0a880cb5ed7ad0e07be0f80ee7857d89c3f8eaf5eaca51e021.scope.
Sep 30 14:32:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:32:17 compute-0 podman[244493]: 2025-09-30 14:32:17.232384584 +0000 UTC m=+0.106889064 container init b2612bd4ade37e0a880cb5ed7ad0e07be0f80ee7857d89c3f8eaf5eaca51e021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:32:17 compute-0 podman[244493]: 2025-09-30 14:32:17.24040532 +0000 UTC m=+0.114909780 container start b2612bd4ade37e0a880cb5ed7ad0e07be0f80ee7857d89c3f8eaf5eaca51e021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:32:17 compute-0 podman[244493]: 2025-09-30 14:32:17.146368472 +0000 UTC m=+0.020872962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:32:17 compute-0 podman[244493]: 2025-09-30 14:32:17.243443351 +0000 UTC m=+0.117947831 container attach b2612bd4ade37e0a880cb5ed7ad0e07be0f80ee7857d89c3f8eaf5eaca51e021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_snyder, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:32:17 compute-0 suspicious_snyder[244509]: 167 167
Sep 30 14:32:17 compute-0 systemd[1]: libpod-b2612bd4ade37e0a880cb5ed7ad0e07be0f80ee7857d89c3f8eaf5eaca51e021.scope: Deactivated successfully.
Sep 30 14:32:17 compute-0 podman[244493]: 2025-09-30 14:32:17.245802775 +0000 UTC m=+0.120307235 container died b2612bd4ade37e0a880cb5ed7ad0e07be0f80ee7857d89c3f8eaf5eaca51e021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b52adbfc9800568604a400db515278cc03af1b8e6fd5b9675d203b924245def-merged.mount: Deactivated successfully.
Sep 30 14:32:17 compute-0 podman[244493]: 2025-09-30 14:32:17.281226227 +0000 UTC m=+0.155730697 container remove b2612bd4ade37e0a880cb5ed7ad0e07be0f80ee7857d89c3f8eaf5eaca51e021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_snyder, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 14:32:17 compute-0 systemd[1]: libpod-conmon-b2612bd4ade37e0a880cb5ed7ad0e07be0f80ee7857d89c3f8eaf5eaca51e021.scope: Deactivated successfully.
Sep 30 14:32:17 compute-0 podman[244533]: 2025-09-30 14:32:17.445473952 +0000 UTC m=+0.039008139 container create 7e2458d6a485376d74520d24548e2d3131a5647cbb97baa67b1509b351095e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:32:17 compute-0 systemd[1]: Started libpod-conmon-7e2458d6a485376d74520d24548e2d3131a5647cbb97baa67b1509b351095e7b.scope.
Sep 30 14:32:17 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:32:17 compute-0 podman[244533]: 2025-09-30 14:32:17.428404203 +0000 UTC m=+0.021938410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df421e9a93571f51ed705b3a60096d9cfa95ba33311514871cb9b7b4cf054e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df421e9a93571f51ed705b3a60096d9cfa95ba33311514871cb9b7b4cf054e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df421e9a93571f51ed705b3a60096d9cfa95ba33311514871cb9b7b4cf054e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df421e9a93571f51ed705b3a60096d9cfa95ba33311514871cb9b7b4cf054e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:32:17 compute-0 podman[244533]: 2025-09-30 14:32:17.545393798 +0000 UTC m=+0.138928005 container init 7e2458d6a485376d74520d24548e2d3131a5647cbb97baa67b1509b351095e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:32:17 compute-0 podman[244533]: 2025-09-30 14:32:17.552609852 +0000 UTC m=+0.146144029 container start 7e2458d6a485376d74520d24548e2d3131a5647cbb97baa67b1509b351095e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_joliot, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:32:17 compute-0 podman[244533]: 2025-09-30 14:32:17.555543181 +0000 UTC m=+0.149077368 container attach 7e2458d6a485376d74520d24548e2d3131a5647cbb97baa67b1509b351095e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:32:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:18 compute-0 lvm[244625]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:32:18 compute-0 lvm[244625]: VG ceph_vg0 finished
Sep 30 14:32:18 compute-0 pedantic_joliot[244549]: {}
Sep 30 14:32:18 compute-0 systemd[1]: libpod-7e2458d6a485376d74520d24548e2d3131a5647cbb97baa67b1509b351095e7b.scope: Deactivated successfully.
Sep 30 14:32:18 compute-0 systemd[1]: libpod-7e2458d6a485376d74520d24548e2d3131a5647cbb97baa67b1509b351095e7b.scope: Consumed 1.058s CPU time.
Sep 30 14:32:18 compute-0 podman[244533]: 2025-09-30 14:32:18.246641789 +0000 UTC m=+0.840175976 container died 7e2458d6a485376d74520d24548e2d3131a5647cbb97baa67b1509b351095e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-6df421e9a93571f51ed705b3a60096d9cfa95ba33311514871cb9b7b4cf054e7-merged.mount: Deactivated successfully.
Sep 30 14:32:18 compute-0 podman[244533]: 2025-09-30 14:32:18.305225454 +0000 UTC m=+0.898759641 container remove 7e2458d6a485376d74520d24548e2d3131a5647cbb97baa67b1509b351095e7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_joliot, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:32:18 compute-0 systemd[1]: libpod-conmon-7e2458d6a485376d74520d24548e2d3131a5647cbb97baa67b1509b351095e7b.scope: Deactivated successfully.
Sep 30 14:32:18 compute-0 sudo[244429]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:32:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:32:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:32:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:18.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:32:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:32:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:18 compute-0 sudo[244639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:32:18 compute-0 sudo[244639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:18 compute-0 sudo[244639]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:18.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:32:19 compute-0 ceph-mon[74194]: pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:32:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:32:19 compute-0 systemd[1]: Reloading.
Sep 30 14:32:19 compute-0 systemd-rc-local-generator[244694]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:32:19 compute-0 systemd-sysv-generator[244698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:32:19 compute-0 systemd[1]: Reloading.
Sep 30 14:32:19 compute-0 systemd-rc-local-generator[244731]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:32:19 compute-0 systemd-sysv-generator[244735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:32:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:20 compute-0 systemd-logind[808]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 30 14:32:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:20.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:32:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:20 compute-0 systemd-logind[808]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Sep 30 14:32:20 compute-0 lvm[244776]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:32:20 compute-0 lvm[244776]: VG ceph_vg0 finished
Sep 30 14:32:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Sep 30 14:32:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Sep 30 14:32:20 compute-0 systemd[1]: Reloading.
Sep 30 14:32:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:20 compute-0 systemd-rc-local-generator[244826]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:32:20 compute-0 systemd-sysv-generator[244832]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:32:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:20.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:21 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Sep 30 14:32:21 compute-0 sudo[245203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:32:21 compute-0 sudo[245203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:21 compute-0 sudo[245203]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:21 compute-0 ceph-mon[74194]: pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:32:21 compute-0 sudo[243960]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Sep 30 14:32:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Sep 30 14:32:22 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.558s CPU time.
Sep 30 14:32:22 compute-0 systemd[1]: run-r0cdd168c7b6f4e3c8b0578dbcc7edcb8.service: Deactivated successfully.
Sep 30 14:32:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:22 compute-0 sudo[246144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zltmvvamehvdsxccxqbydaitumkjqzqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242741.8902323-2846-137289154692383/AnsiballZ_file.py'
Sep 30 14:32:22 compute-0 sudo[246144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:22.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:32:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:22 compute-0 python3.9[246146]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:22 compute-0 sudo[246144]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:22.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:23 compute-0 python3.9[246296]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Sep 30 14:32:23 compute-0 ceph-mon[74194]: pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:32:23 compute-0 sudo[246452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xullygtvqaldjiwkhywywffavdktkgwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242743.7257242-2898-36453489722037/AnsiballZ_file.py'
Sep 30 14:32:23 compute-0 sudo[246452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c002e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:24 compute-0 python3.9[246454]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:24 compute-0 sudo[246452]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:24.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:24] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:32:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:24] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:32:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:24.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:25 compute-0 sudo[246605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-putcyzttsjzkbovzbnesdovylaqdqhmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242744.7239063-2931-35637249775924/AnsiballZ_systemd_service.py'
Sep 30 14:32:25 compute-0 sudo[246605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:25 compute-0 ceph-mon[74194]: pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:25 compute-0 python3.9[246607]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 14:32:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:25 compute-0 systemd[1]: Reloading.
Sep 30 14:32:25 compute-0 systemd-rc-local-generator[246634]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:32:25 compute-0 systemd-sysv-generator[246637]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:32:26 compute-0 sudo[246605]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:26.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c002e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:26 compute-0 python3.9[246793]: ansible-ansible.builtin.service_facts Invoked
Sep 30 14:32:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:26.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:26 compute-0 network[246810]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Sep 30 14:32:26 compute-0 network[246811]: 'network-scripts' will be removed from distribution in near future.
Sep 30 14:32:26 compute-0 network[246812]: It is advised to switch to 'NetworkManager' instead for network management.
Sep 30 14:32:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:32:27.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:32:27 compute-0 ceph-mon[74194]: pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:28 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:28.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:28 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54003480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:28.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:28 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:29 compute-0 ceph-mon[74194]: pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:32:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:32:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:32:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:32:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:32:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:32:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:32:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:32:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:30 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000054s ======
Sep 30 14:32:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:30.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Sep 30 14:32:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:30 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:32:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:30.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:30 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:31 compute-0 ceph-mon[74194]: pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:32:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:32.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:32 compute-0 sudo[247094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvwkkyjvywuayqzrkuxibccitdkfnjll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242752.4774299-2988-54222674634125/AnsiballZ_systemd_service.py'
Sep 30 14:32:32 compute-0 sudo[247094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:32:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:32.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:32:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:33 compute-0 python3.9[247096]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:32:33 compute-0 sudo[247094]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:33 compute-0 sudo[247248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzarkwgswsmeijhhgykplljevuzuodrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242753.1842475-2988-174356805267600/AnsiballZ_systemd_service.py'
Sep 30 14:32:33 compute-0 sudo[247248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:33 compute-0 ceph-mon[74194]: pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:32:33 compute-0 python3.9[247250]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:32:33 compute-0 sudo[247248]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:34 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c003b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:34 compute-0 sudo[247402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caoucjofaxoiqlptumbmfwmwzpuxjchp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242753.9133055-2988-250277506550439/AnsiballZ_systemd_service.py'
Sep 30 14:32:34 compute-0 sudo[247402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:34.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:34 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:34 compute-0 python3.9[247404]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:32:34 compute-0 sudo[247402]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:34] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:32:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:34] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:32:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:34.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:34 compute-0 sudo[247555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdfunhbvokxcobnddjjzkdzbkkfbcint ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242754.6299043-2988-87835857832454/AnsiballZ_systemd_service.py'
Sep 30 14:32:34 compute-0 sudo[247555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:34 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:35 compute-0 python3.9[247557]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:32:35 compute-0 sudo[247555]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:35 compute-0 podman[247559]: 2025-09-30 14:32:35.312078633 +0000 UTC m=+0.065861741 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd)
Sep 30 14:32:35 compute-0 ceph-mon[74194]: pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:35 compute-0 sudo[247731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsbtoktyingbwblpwwxkfarmiceornit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242755.3924015-2988-121012665058393/AnsiballZ_systemd_service.py'
Sep 30 14:32:35 compute-0 sudo[247731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:35 compute-0 python3.9[247733]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:32:35 compute-0 sudo[247731]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:36 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:36.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:36 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c0048a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:36 compute-0 sudo[247885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knvghhmllwvqhliiiyspammcqewnwawb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242756.1228273-2988-158861431185003/AnsiballZ_systemd_service.py'
Sep 30 14:32:36 compute-0 sudo[247885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:36 compute-0 python3.9[247887]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:32:36 compute-0 sudo[247885]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:36.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:36 compute-0 podman[247889]: 2025-09-30 14:32:36.863797186 +0000 UTC m=+0.078419429 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Sep 30 14:32:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:36 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:32:37.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:32:37 compute-0 sudo[248065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukfgssachtndjyekhbkitflghnirygfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242756.9277768-2988-4845739288951/AnsiballZ_systemd_service.py'
Sep 30 14:32:37 compute-0 sudo[248065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:37 compute-0 python3.9[248067]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:32:37 compute-0 sudo[248065]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:37 compute-0 ceph-mon[74194]: pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:37 compute-0 sudo[248220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-finboxdujojvdchwbuahatdgwhxdbvhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242757.6074271-2988-215357644022540/AnsiballZ_systemd_service.py'
Sep 30 14:32:37 compute-0 sudo[248220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:38 compute-0 python3.9[248222]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:32:38 compute-0 sudo[248220]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:32:38.248 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:32:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:32:38.249 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:32:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:32:38.249 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:32:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:38.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:38.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f5c0048a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:39 compute-0 sudo[248373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvyxayvvabmfbjeaxzqzycuxnzoqmtmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242759.0089588-3165-152004563087116/AnsiballZ_file.py'
Sep 30 14:32:39 compute-0 sudo[248373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:39 compute-0 python3.9[248375]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:39 compute-0 sudo[248373]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:39 compute-0 ceph-mon[74194]: pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:39 compute-0 sudo[248527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufwgayfqylimjlbewpjfcahnniorkfyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242759.6316404-3165-19635303001420/AnsiballZ_file.py'
Sep 30 14:32:39 compute-0 sudo[248527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:40 compute-0 python3.9[248529]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:40 compute-0 sudo[248527]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c003960 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:40 compute-0 sudo[248694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdtvxfigbgyvpizjvswlapvieefazljw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242760.2527735-3165-57637481544283/AnsiballZ_file.py'
Sep 30 14:32:40 compute-0 sudo[248694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:40 compute-0 podman[248653]: 2025-09-30 14:32:40.528460698 +0000 UTC m=+0.059625714 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:32:40 compute-0 python3.9[248702]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.719787) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242760719818, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1156, "num_deletes": 255, "total_data_size": 2127210, "memory_usage": 2168192, "flush_reason": "Manual Compaction"}
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Sep 30 14:32:40 compute-0 sudo[248694]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242760737279, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2058707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17781, "largest_seqno": 18935, "table_properties": {"data_size": 2053266, "index_size": 2836, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11130, "raw_average_key_size": 18, "raw_value_size": 2042326, "raw_average_value_size": 3426, "num_data_blocks": 128, "num_entries": 596, "num_filter_entries": 596, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759242655, "oldest_key_time": 1759242655, "file_creation_time": 1759242760, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 17537 microseconds, and 5060 cpu microseconds.
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.737322) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2058707 bytes OK
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.737341) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.739604) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.739622) EVENT_LOG_v1 {"time_micros": 1759242760739617, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.739639) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2122055, prev total WAL file size 2122055, number of live WAL files 2.
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.740314) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2010KB)], [38(11MB)]
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242760740431, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 13907888, "oldest_snapshot_seqno": -1}
Sep 30 14:32:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:40.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4948 keys, 13438080 bytes, temperature: kUnknown
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242760864050, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13438080, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13403495, "index_size": 21099, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 125882, "raw_average_key_size": 25, "raw_value_size": 13312043, "raw_average_value_size": 2690, "num_data_blocks": 866, "num_entries": 4948, "num_filter_entries": 4948, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759242760, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.864422) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13438080 bytes
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.866852) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.4 rd, 108.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.3 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(13.3) write-amplify(6.5) OK, records in: 5473, records dropped: 525 output_compression: NoCompression
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.866877) EVENT_LOG_v1 {"time_micros": 1759242760866865, "job": 18, "event": "compaction_finished", "compaction_time_micros": 123743, "compaction_time_cpu_micros": 30127, "output_level": 6, "num_output_files": 1, "total_output_size": 13438080, "num_input_records": 5473, "num_output_records": 4948, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242760867374, "job": 18, "event": "table_file_deletion", "file_number": 40}
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242760869193, "job": 18, "event": "table_file_deletion", "file_number": 38}
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.740125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.869322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.869331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.869333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.869335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:32:40 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:32:40.869337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:32:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:41 compute-0 sudo[248852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvzpwfdrxxmvacdafxljuifaqrykcrhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242760.8558357-3165-44931378325012/AnsiballZ_file.py'
Sep 30 14:32:41 compute-0 sudo[248852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:41 compute-0 python3.9[248854]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:41 compute-0 sudo[248852]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:41 compute-0 sudo[248860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:32:41 compute-0 sudo[248860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:32:41 compute-0 sudo[248860]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:41 compute-0 ceph-mon[74194]: pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:41 compute-0 sudo[249031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxtffpamwtapyovkndbvjggleteldoge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242761.5117037-3165-111092219517300/AnsiballZ_file.py'
Sep 30 14:32:41 compute-0 sudo[249031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:41 compute-0 python3.9[249033]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:41 compute-0 sudo[249031]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:42 compute-0 podman[249157]: 2025-09-30 14:32:42.391060337 +0000 UTC m=+0.043237303 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 14:32:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:32:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:42 compute-0 sudo[249197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivtrqkskkuatwefdgpfnbudkcndnefvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242762.120249-3165-163200011695655/AnsiballZ_file.py'
Sep 30 14:32:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:42.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:42 compute-0 sudo[249197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:42 compute-0 python3.9[249201]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:42 compute-0 sudo[249197]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:42.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:42 compute-0 sudo[249351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgzropnjesbhbpnykyznuxkxtzeftxvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242762.7125804-3165-86037568444870/AnsiballZ_file.py'
Sep 30 14:32:42 compute-0 sudo[249351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:43 compute-0 python3.9[249353]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:43 compute-0 sudo[249351]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:43 compute-0 sudo[249504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwnzgzovxdpqagkocowsjvomesixqifq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242763.2972822-3165-5864815095458/AnsiballZ_file.py'
Sep 30 14:32:43 compute-0 sudo[249504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:43 compute-0 python3.9[249506]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:43 compute-0 ceph-mon[74194]: pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:32:43 compute-0 sudo[249504]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:44.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:44 compute-0 sudo[249658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjjrgejyqlhgbskhasyerpsqetcpcmdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242764.1641648-3336-259957811129092/AnsiballZ_file.py'
Sep 30 14:32:44 compute-0 sudo[249658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:44 compute-0 python3.9[249660]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:32:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:32:44 compute-0 sudo[249658]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:44] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:32:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:44] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:32:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:32:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:44.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:45 compute-0 sudo[249810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyyvrsbbressxvbuzmgjyjcbwmmiimig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242764.7804646-3336-172490143109730/AnsiballZ_file.py'
Sep 30 14:32:45 compute-0 sudo[249810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:45 compute-0 python3.9[249812]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:45 compute-0 sudo[249810]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:45 compute-0 sudo[249963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtxhdkeizgckctrcyckvuftqbajoioza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242765.4029512-3336-163596117609973/AnsiballZ_file.py'
Sep 30 14:32:45 compute-0 sudo[249963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:45 compute-0 ceph-mon[74194]: pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:45 compute-0 python3.9[249965]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:45 compute-0 sudo[249963]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:46 compute-0 sudo[250117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfnqvsjtkjpsrdawapyumrtbbrumkcgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242765.988676-3336-120470985788609/AnsiballZ_file.py'
Sep 30 14:32:46 compute-0 sudo[250117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:46.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:46 compute-0 python3.9[250119]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:46 compute-0 sudo[250117]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:46 compute-0 sudo[250269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubzbifyjwnuyrkplcrptprcwfzmqoyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242766.5732152-3336-150284461738706/AnsiballZ_file.py'
Sep 30 14:32:46 compute-0 sudo[250269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:46.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:47 compute-0 python3.9[250271]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:47 compute-0 sudo[250269]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:32:47.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:32:47 compute-0 sudo[250422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peuawigyzthrlskdimsqzkzyqtaixzdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242767.1479175-3336-217817436870054/AnsiballZ_file.py'
Sep 30 14:32:47 compute-0 sudo[250422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:47 compute-0 python3.9[250424]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:47 compute-0 sudo[250422]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:47 compute-0 ceph-mon[74194]: pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:47 compute-0 sudo[250575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ailpgupjelmewjfjuebxcogcjnsdtmwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242767.712777-3336-27142410503943/AnsiballZ_file.py'
Sep 30 14:32:47 compute-0 sudo[250575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:48 compute-0 python3.9[250577]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:48 compute-0 sudo[250575]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:48.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:48 compute-0 sudo[250727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aopernarpatjdrphyfwfdstunnaxttiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242768.2903888-3336-34366170599220/AnsiballZ_file.py'
Sep 30 14:32:48 compute-0 sudo[250727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:48 compute-0 python3.9[250729]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:32:48 compute-0 sudo[250727]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:48.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:49 compute-0 sudo[250880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gznboasryfpgrwcpgubnfmouwrakzbwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242769.2095826-3510-33100377362440/AnsiballZ_command.py'
Sep 30 14:32:49 compute-0 sudo[250880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:49 compute-0 python3.9[250882]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:32:49 compute-0 sudo[250880]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:49 compute-0 ceph-mon[74194]: pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143250 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:32:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:50 compute-0 python3.9[251035]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Sep 30 14:32:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:50.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:51 compute-0 sudo[251185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhceebedwhifpvnfqxtbmathpkbchgpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242770.9633849-3564-39383820214005/AnsiballZ_systemd_service.py'
Sep 30 14:32:51 compute-0 sudo[251185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:51 compute-0 python3.9[251187]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 14:32:51 compute-0 systemd[1]: Reloading.
Sep 30 14:32:51 compute-0 systemd-rc-local-generator[251213]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:32:51 compute-0 systemd-sysv-generator[251217]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:32:51 compute-0 ceph-mon[74194]: pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:32:51 compute-0 sudo[251185]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:32:52 compute-0 sudo[251373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewsicacifmxlwgpovmijhauucaqomuys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242772.13475-3588-45135456869172/AnsiballZ_command.py'
Sep 30 14:32:52 compute-0 sudo[251373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:52.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:52 compute-0 python3.9[251375]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:32:52 compute-0 sudo[251373]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:52.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:53 compute-0 sudo[251526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaizbftsmlwzjmpafcpeoodwsmutvkjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242772.7711504-3588-85523888315611/AnsiballZ_command.py'
Sep 30 14:32:53 compute-0 sudo[251526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:53 compute-0 python3.9[251528]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:32:53 compute-0 sudo[251526]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:53 compute-0 sudo[251680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bctnbzufvebffjzgmygcykzjddyfyjql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242773.3852882-3588-79544833703102/AnsiballZ_command.py'
Sep 30 14:32:53 compute-0 sudo[251680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:53 compute-0 python3.9[251682]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:32:53 compute-0 sudo[251680]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:53 compute-0 ceph-mon[74194]: pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:32:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:54 compute-0 sudo[251835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znpgcirtwqmhhjlaqrdlbhuwfdufheiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242773.9936204-3588-185250483280978/AnsiballZ_command.py'
Sep 30 14:32:54 compute-0 sudo[251835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:32:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:54.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:54 compute-0 python3.9[251837]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:32:54 compute-0 sudo[251835]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:54] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:32:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:32:54] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:32:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:54.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:54 compute-0 sudo[251988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qykaeaxnykokocmhcxkfystghdxgdeks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242774.6701317-3588-204264743783086/AnsiballZ_command.py'
Sep 30 14:32:54 compute-0 sudo[251988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:55 compute-0 python3.9[251990]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:32:55 compute-0 sudo[251988]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:55 compute-0 sudo[252142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqswzwyuniwuvccvwgplysugdvrxdhmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242775.2701118-3588-280847969844270/AnsiballZ_command.py'
Sep 30 14:32:55 compute-0 sudo[252142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:55 compute-0 python3.9[252144]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:32:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:32:55 compute-0 sudo[252142]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:55 compute-0 ceph-mon[74194]: pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:32:56 compute-0 sudo[252296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzkknbdztbcjqqpavfbnxfskgbrxojlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242775.8843822-3588-153461151630671/AnsiballZ_command.py'
Sep 30 14:32:56 compute-0 sudo[252296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:56 compute-0 python3.9[252298]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:32:56 compute-0 sudo[252296]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:32:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:56.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:56 compute-0 sudo[252449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gismlngfyqvqlredrjbkwtepcixlieuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242776.5314856-3588-99775634571446/AnsiballZ_command.py'
Sep 30 14:32:56 compute-0 sudo[252449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:56.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:56 compute-0 python3.9[252451]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Sep 30 14:32:57 compute-0 sudo[252449]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:32:57.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:32:57 compute-0 ceph-mon[74194]: pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:32:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:32:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:32:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:32:58.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:32:58 compute-0 sudo[252604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huyvgrllyjlsserxetzpygynyijwkatb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242778.3529944-3795-85934939744240/AnsiballZ_file.py'
Sep 30 14:32:58 compute-0 sudo[252604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:58 compute-0 python3.9[252606]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:32:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:32:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:32:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:32:58.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:32:58 compute-0 sudo[252604]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:32:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 14:32:59 compute-0 sudo[252756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngdvfjyqkadwqvkzhncpywragwbjkwix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242778.9980927-3795-52861901933345/AnsiballZ_file.py'
Sep 30 14:32:59 compute-0 sudo[252756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:32:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:32:59 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:32:59 compute-0 python3.9[252758]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:32:59
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'backups', 'vms', '.rgw.root', '.mgr', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control', 'default.rgw.meta']
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:32:59 compute-0 sudo[252756]: pam_unix(sudo:session): session closed for user root
Sep 30 14:32:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:32:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:32:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:32:59 compute-0 ceph-mon[74194]: pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:32:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:32:59 compute-0 sudo[252910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bezzyxxxoiaskgknfkznvbtkycgoumhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242779.6637604-3795-165926470758215/AnsiballZ_file.py'
Sep 30 14:32:59 compute-0 sudo[252910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:00 compute-0 python3.9[252912]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:00 compute-0 sudo[252910]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:00.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
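The anonymous "HEAD / HTTP/1.0" requests that radosgw keeps logging from 192.168.122.100 and 192.168.122.102 look like periodic load-balancer health probes. One way to reproduce such a probe by hand; the port is an assumption, since these beast lines do not show the listening endpoint:

    # Reproduce the health probe seen in the radosgw/beast access lines above.
    # Host taken from the log; port 8080 is an assumption (not shown in the log).
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # the log shows these probes returning 200
    conn.close()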
Sep 30 14:33:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:00 compute-0 sudo[253062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sodbjmduadfwhrhidtxldyztcgznvqkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242780.4198341-3861-82916500798853/AnsiballZ_file.py'
Sep 30 14:33:00 compute-0 sudo[253062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:33:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:00.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:00 compute-0 python3.9[253064]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:00 compute-0 sudo[253062]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:33:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:33:01 compute-0 sudo[253215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpvndyvcbrpazqeqaucjpxrmbhzdtiex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242781.1098506-3861-36098467827439/AnsiballZ_file.py'
Sep 30 14:33:01 compute-0 sudo[253215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:01 compute-0 sudo[253218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:33:01 compute-0 sudo[253218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:01 compute-0 sudo[253218]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:01 compute-0 python3.9[253217]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:01 compute-0 sudo[253215]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:01 compute-0 ceph-mon[74194]: pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:01 compute-0 sudo[253393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbshmpyhnpcybzrguyvkdmdkerlperdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242781.7054248-3861-249746488845978/AnsiballZ_file.py'
Sep 30 14:33:01 compute-0 sudo[253393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:02 compute-0 python3.9[253395]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:02 compute-0 sudo[253393]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:33:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:33:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:33:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:02.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:02 compute-0 sudo[253545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlrqhqonfekvnvxaisfcgscmrhpootrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242782.3483546-3861-83388763455332/AnsiballZ_file.py'
Sep 30 14:33:02 compute-0 sudo[253545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:02 compute-0 python3.9[253547]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:02 compute-0 sudo[253545]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:02.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:03 compute-0 sudo[253697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elvacdctjytfxedepawtczzbxnnworio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242782.9227881-3861-106354480385364/AnsiballZ_file.py'
Sep 30 14:33:03 compute-0 sudo[253697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:03 compute-0 python3.9[253699]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:03 compute-0 sudo[253697]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:03 compute-0 sudo[253851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjxtarzoamywkqembuaibiogaflgexdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242783.5755963-3861-227575554772954/AnsiballZ_file.py'
Sep 30 14:33:03 compute-0 sudo[253851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:03 compute-0 ceph-mon[74194]: pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:33:04 compute-0 python3.9[253853]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:04 compute-0 sudo[253851]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:33:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:04.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:04 compute-0 sudo[254003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnaewwnkjqdvlysqnlxocqimilettjpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242784.189499-3861-140225122334812/AnsiballZ_file.py'
Sep 30 14:33:04 compute-0 sudo[254003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:04 compute-0 python3.9[254005]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:04 compute-0 sudo[254003]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:04] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:33:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:04] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
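The cherrypy access line above records Prometheus scraping the ceph-mgr prometheus module on 192.168.122.100. A quick manual fetch of the same endpoint; port 9283 is the module's usual default and is assumed here, since the log does not include it:

    # Fetch the same /metrics endpoint the Prometheus scrape above hits.
    # Port 9283 is an assumption (ceph-mgr prometheus module default).
    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
        body = r.read()
    print(len(body), "bytes")   # the access log above reports 48418 bytes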
Sep 30 14:33:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:04.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f300040a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:05 compute-0 sudo[254155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkhftecifmssrwbtilvfuujypbzwgjjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242784.7952282-3861-5331210280862/AnsiballZ_file.py'
Sep 30 14:33:05 compute-0 sudo[254155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:05 compute-0 python3.9[254157]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:05 compute-0 sudo[254155]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:05 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:33:05 compute-0 sudo[254321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jogcyxsioqjvygmfhnpplyxjppfmidzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242785.343269-3861-19523574511623/AnsiballZ_file.py'
Sep 30 14:33:05 compute-0 sudo[254321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:05 compute-0 podman[254282]: 2025-09-30 14:33:05.609611961 +0000 UTC m=+0.065540100 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Sep 30 14:33:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:05 compute-0 python3.9[254329]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:05 compute-0 sudo[254321]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:05 compute-0 ceph-mon[74194]: pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:33:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:06.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:06.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:07.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:33:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:07.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:33:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:07.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
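The alertmanager warnings above show webhook notifications to the ceph-dashboard prometheus_receiver endpoints on compute-1 and compute-2 timing out. A small reachability check against one of the URLs taken verbatim from those lines; the JSON body here is a placeholder, not the real alertmanager payload:

    # Probe the webhook endpoint alertmanager cannot reach in the lines above.
    # The body is a placeholder; alertmanager posts its own alert schema.
    import json, urllib.request

    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=json.dumps({"alerts": []}).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except OSError as exc:   # the log shows 'dial tcp ... i/o timeout'
        print("unreachable:", exc)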
Sep 30 14:33:07 compute-0 podman[254356]: 2025-09-30 14:33:07.157975186 +0000 UTC m=+0.084831627 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 14:33:07 compute-0 ceph-mon[74194]: pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f300040a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f300040a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:08.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:08.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:10 compute-0 ceph-mon[74194]: pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:10.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:11 compute-0 podman[254386]: 2025-09-30 14:33:11.123072625 +0000 UTC m=+0.056064006 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Sep 30 14:33:11 compute-0 sudo[254532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jexszamobofnutezvzwyaqvelwaoikbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242791.2826438-4228-125488783129882/AnsiballZ_getent.py'
Sep 30 14:33:11 compute-0 sudo[254532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:11 compute-0 python3.9[254534]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Sep 30 14:33:11 compute-0 sudo[254532]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:12 compute-0 ceph-mon[74194]: pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143312 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:33:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:12.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:12 compute-0 podman[254660]: 2025-09-30 14:33:12.78094134 +0000 UTC m=+0.049643633 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Sep 30 14:33:12 compute-0 sudo[254704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpsbngbdkmfcirizjnxfvwowhwkfemxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242792.3477433-4252-195194387740157/AnsiballZ_group.py'
Sep 30 14:33:12 compute-0 sudo[254704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:12.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:12 compute-0 python3.9[254706]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Sep 30 14:33:13 compute-0 groupadd[254707]: group added to /etc/group: name=nova, GID=42436
Sep 30 14:33:13 compute-0 groupadd[254707]: group added to /etc/gshadow: name=nova
Sep 30 14:33:13 compute-0 groupadd[254707]: new group: name=nova, GID=42436
Sep 30 14:33:13 compute-0 sudo[254704]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:13 compute-0 sudo[254863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hinjaelbzryxgplsglgeaydkqrzbbbad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242793.2414165-4276-198217219435312/AnsiballZ_user.py'
Sep 30 14:33:13 compute-0 sudo[254863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:13 compute-0 python3.9[254865]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Sep 30 14:33:13 compute-0 useradd[254868]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Sep 30 14:33:13 compute-0 useradd[254868]: add 'nova' to group 'libvirt'
Sep 30 14:33:13 compute-0 useradd[254868]: add 'nova' to shadow group 'libvirt'
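The groupadd/useradd lines above report creating the nova account with UID/GID 42436, home /home/nova, shell /bin/sh, and membership in the libvirt group. A small verification sketch against exactly those logged values:

    # Verify the account the groupadd/useradd lines above report creating.
    import pwd, grp

    u = pwd.getpwnam("nova")
    assert (u.pw_uid, u.pw_gid) == (42436, 42436)
    assert u.pw_dir == "/home/nova" and u.pw_shell == "/bin/sh"
    assert "nova" in grp.getgrnam("libvirt").gr_mem
    print("nova account matches the log")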
Sep 30 14:33:14 compute-0 ceph-mon[74194]: pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:14 compute-0 sudo[254863]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:33:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:14.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:33:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:33:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:14] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:33:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:14] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:33:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:14.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:14 compute-0 sshd-session[254899]: Accepted publickey for zuul from 192.168.122.30 port 33932 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:33:14 compute-0 systemd-logind[808]: New session 56 of user zuul.
Sep 30 14:33:14 compute-0 systemd[1]: Started Session 56 of User zuul.
Sep 30 14:33:14 compute-0 sshd-session[254899]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:33:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:33:15 compute-0 sshd-session[254902]: Received disconnect from 192.168.122.30 port 33932:11: disconnected by user
Sep 30 14:33:15 compute-0 sshd-session[254902]: Disconnected from user zuul 192.168.122.30 port 33932
Sep 30 14:33:15 compute-0 sshd-session[254899]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:33:15 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Sep 30 14:33:15 compute-0 systemd-logind[808]: Session 56 logged out. Waiting for processes to exit.
Sep 30 14:33:15 compute-0 systemd-logind[808]: Removed session 56.
Sep 30 14:33:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:15 compute-0 python3.9[255053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:33:16 compute-0 ceph-mon[74194]: pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:33:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:16 compute-0 python3.9[255175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242795.2909725-4351-85487467151530/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
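The ansible-ansible.legacy.copy task above records the SHA-1 checksum of the nova config.json it deployed. That checksum can be rechecked against the file on disk using only values present in the log line:

    # Verify the deployed file against the sha1 recorded in the copy task above.
    import hashlib

    expected = "2c2474b5f24ef7c9ed37f49680082593e0d1100b"
    with open("/var/lib/openstack/config/nova/config.json", "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    print("match" if digest == expected else "mismatch", digest)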
Sep 30 14:33:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:33:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:16.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:16.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:16 compute-0 python3.9[255325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:33:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:17.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:33:17 compute-0 python3.9[255401]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:18 compute-0 python3.9[255553]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:33:18 compute-0 ceph-mon[74194]: pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:33:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:18.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:18 compute-0 python3.9[255674]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242797.6545863-4351-241449826729571/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:18 compute-0 sudo[255699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:33:18 compute-0 sudo[255699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:18 compute-0 sudo[255699]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:18 compute-0 sudo[255748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 14:33:18 compute-0 sudo[255748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:18.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f540045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:19 compute-0 sudo[255748]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:33:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:33:19 compute-0 python3.9[255881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:33:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:33:19 compute-0 ceph-mon[74194]: pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:33:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:19 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:19 compute-0 sudo[256019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:33:19 compute-0 sudo[256019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:19 compute-0 sudo[256019]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:19 compute-0 sudo[256044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:33:19 compute-0 sudo[256044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:19 compute-0 python3.9[256018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242798.7502387-4351-241085426526577/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:20 compute-0 sudo[256044]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:33:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:33:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:33:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:33:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:33:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480042a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:33:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:33:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:33:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:33:20 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:33:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:33:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:33:20 compute-0 sudo[256253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:33:20 compute-0 sudo[256253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:20 compute-0 sudo[256253]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:33:20 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
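Annotation: the audit entries above show the cephadm mgr module dispatching routine mon commands before touching the OSDs: fetching keyrings ("auth get"), listing destroyed OSDs ("osd tree"), and generating a minimal ceph.conf. A minimal sketch of reproducing the same queries from a node that has an admin keyring, assuming the stock ceph CLI is on PATH (the helper below is illustrative, not part of this deployment):

    # reproduce_mgr_queries.py - illustrative only; assumes /etc/ceph/ceph.conf
    # and an admin keyring are present on the node.
    import json
    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its stdout as text."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # The same queries the mgr dispatched in the audit log above.
    minimal_conf = ceph("config", "generate-minimal-conf")
    bootstrap_key = ceph("auth", "get", "client.bootstrap-osd")
    destroyed = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))

    print(minimal_conf)
    print([n["id"] for n in destroyed.get("nodes", []) if n.get("type") == "osd"])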
Sep 30 14:33:20 compute-0 sudo[256278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:33:20 compute-0 python3.9[256252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:33:20 compute-0 sudo[256278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:20.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
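Annotation: the anonymous "HEAD / HTTP/1.0" requests logged by radosgw above are load-balancer health probes against the beast frontend (they recur from 192.168.122.100 and 192.168.122.102 every couple of seconds). A minimal sketch of an equivalent probe, assuming the frontend listens on port 8080 on this host; the port is an assumption, it is not shown in the log:

    # rgw_healthcheck.py - illustrative probe equivalent to the "HEAD /" requests
    # seen in the radosgw log; host and port are assumptions, not log values.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # 200 indicates the RGW frontend is answering
    conn.close()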
Sep 30 14:33:20 compute-0 podman[256464]: 2025-09-30 14:33:20.718448421 +0000 UTC m=+0.038161325 container create 4dba5e0b9b7e2b92beb88b472d8142539d61a3094167e0c6e3ff3cc091f187a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_leakey, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:33:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:20 compute-0 systemd[1]: Started libpod-conmon-4dba5e0b9b7e2b92beb88b472d8142539d61a3094167e0c6e3ff3cc091f187a6.scope.
Sep 30 14:33:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:33:20 compute-0 podman[256464]: 2025-09-30 14:33:20.701231869 +0000 UTC m=+0.020944793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:33:20 compute-0 podman[256464]: 2025-09-30 14:33:20.797675277 +0000 UTC m=+0.117388201 container init 4dba5e0b9b7e2b92beb88b472d8142539d61a3094167e0c6e3ff3cc091f187a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_leakey, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:33:20 compute-0 podman[256464]: 2025-09-30 14:33:20.804056648 +0000 UTC m=+0.123769552 container start 4dba5e0b9b7e2b92beb88b472d8142539d61a3094167e0c6e3ff3cc091f187a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_leakey, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:33:20 compute-0 podman[256464]: 2025-09-30 14:33:20.807260734 +0000 UTC m=+0.126973638 container attach 4dba5e0b9b7e2b92beb88b472d8142539d61a3094167e0c6e3ff3cc091f187a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:33:20 compute-0 nifty_leakey[256481]: 167 167
Sep 30 14:33:20 compute-0 systemd[1]: libpod-4dba5e0b9b7e2b92beb88b472d8142539d61a3094167e0c6e3ff3cc091f187a6.scope: Deactivated successfully.
Sep 30 14:33:20 compute-0 conmon[256481]: conmon 4dba5e0b9b7e2b92beb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4dba5e0b9b7e2b92beb88b472d8142539d61a3094167e0c6e3ff3cc091f187a6.scope/container/memory.events
Sep 30 14:33:20 compute-0 podman[256464]: 2025-09-30 14:33:20.811683973 +0000 UTC m=+0.131396887 container died 4dba5e0b9b7e2b92beb88b472d8142539d61a3094167e0c6e3ff3cc091f187a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-98e07d68087c21062d2c02f10c454b323a24897c37683730f2495e3b4bac3448-merged.mount: Deactivated successfully.
Sep 30 14:33:20 compute-0 python3.9[256463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242799.9108524-4351-278443131691514/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:20 compute-0 podman[256464]: 2025-09-30 14:33:20.854469901 +0000 UTC m=+0.174182805 container remove 4dba5e0b9b7e2b92beb88b472d8142539d61a3094167e0c6e3ff3cc091f187a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_leakey, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:33:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:20.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:20 compute-0 systemd[1]: libpod-conmon-4dba5e0b9b7e2b92beb88b472d8142539d61a3094167e0c6e3ff3cc091f187a6.scope: Deactivated successfully.
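Annotation: the short-lived nifty_leakey container above (create, init, start, attach, died, remove within ~100 ms) prints only "167 167", which is cephadm probing the uid/gid of the ceph user inside the container image before running ceph-volume; 167 is the ceph uid/gid in this image. A minimal sketch of an equivalent probe, assuming the same image reference and that stat'ing a ceph-owned path is how the numbers are obtained (an assumption about cephadm internals, not confirmed by this log):

    # uid_gid_probe.py - rough equivalent of the "167 167" throwaway container;
    # the probed path is an assumption about what cephadm checks.
    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout
    uid, gid = out.split()
    print(uid, gid)   # expected "167 167" for this image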
Sep 30 14:33:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:21 compute-0 podman[256527]: 2025-09-30 14:33:21.018475243 +0000 UTC m=+0.049697045 container create 507fe36860bff0258abf4def5d5ebe96e163b067cb738fd70d5d29ce841f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swanson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:33:21 compute-0 systemd[1]: Started libpod-conmon-507fe36860bff0258abf4def5d5ebe96e163b067cb738fd70d5d29ce841f79b2.scope.
Sep 30 14:33:21 compute-0 podman[256527]: 2025-09-30 14:33:20.990974295 +0000 UTC m=+0.022196147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:33:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9154d99168b95b2fce21f59e2020055dd9bca692500e0cd590df67bc83553f55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9154d99168b95b2fce21f59e2020055dd9bca692500e0cd590df67bc83553f55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9154d99168b95b2fce21f59e2020055dd9bca692500e0cd590df67bc83553f55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9154d99168b95b2fce21f59e2020055dd9bca692500e0cd590df67bc83553f55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9154d99168b95b2fce21f59e2020055dd9bca692500e0cd590df67bc83553f55/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:21 compute-0 podman[256527]: 2025-09-30 14:33:21.113024111 +0000 UTC m=+0.144245923 container init 507fe36860bff0258abf4def5d5ebe96e163b067cb738fd70d5d29ce841f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swanson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:33:21 compute-0 podman[256527]: 2025-09-30 14:33:21.119854674 +0000 UTC m=+0.151076476 container start 507fe36860bff0258abf4def5d5ebe96e163b067cb738fd70d5d29ce841f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Sep 30 14:33:21 compute-0 podman[256527]: 2025-09-30 14:33:21.123211624 +0000 UTC m=+0.154433426 container attach 507fe36860bff0258abf4def5d5ebe96e163b067cb738fd70d5d29ce841f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 14:33:21 compute-0 ceph-mon[74194]: pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:21 compute-0 cool_swanson[256543]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:33:21 compute-0 cool_swanson[256543]: --> All data devices are unavailable
Sep 30 14:33:21 compute-0 systemd[1]: libpod-507fe36860bff0258abf4def5d5ebe96e163b067cb738fd70d5d29ce841f79b2.scope: Deactivated successfully.
Sep 30 14:33:21 compute-0 podman[256527]: 2025-09-30 14:33:21.458933624 +0000 UTC m=+0.490155436 container died 507fe36860bff0258abf4def5d5ebe96e163b067cb738fd70d5d29ce841f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 14:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9154d99168b95b2fce21f59e2020055dd9bca692500e0cd590df67bc83553f55-merged.mount: Deactivated successfully.
Sep 30 14:33:21 compute-0 podman[256527]: 2025-09-30 14:33:21.498871956 +0000 UTC m=+0.530093758 container remove 507fe36860bff0258abf4def5d5ebe96e163b067cb738fd70d5d29ce841f79b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_swanson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:33:21 compute-0 systemd[1]: libpod-conmon-507fe36860bff0258abf4def5d5ebe96e163b067cb738fd70d5d29ce841f79b2.scope: Deactivated successfully.
Sep 30 14:33:21 compute-0 sudo[256278]: pam_unix(sudo:session): session closed for user root
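Annotation: the cool_swanson run of "ceph-volume lvm batch" above reported "passed data devices: 0 physical, 1 LVM" and "All data devices are unavailable", so the batch call was a no-op: /dev/ceph_vg0/ceph_lv0 already carries Ceph LVM tags (visible in the lvm list output further below), which is what marks it as in use by osd.0. A minimal sketch of checking those tags directly on the host, assuming lvm2's JSON report output is available:

    # check_lv_tags.py - inspects the LVM tags that mark an LV as an existing
    # Ceph OSD; relies on lvm2's --reportformat json, assumed available here.
    import json
    import subprocess

    report = json.loads(subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
        check=True, capture_output=True, text=True).stdout)

    for lv in report["report"][0]["lv"]:
        if "ceph.osd_id=" in lv.get("lv_tags", ""):
            print(f"{lv['vg_name']}/{lv['lv_name']} already belongs to a Ceph OSD")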
Sep 30 14:33:21 compute-0 sudo[256670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:33:21 compute-0 sudo[256670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:21 compute-0 sudo[256676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:33:21 compute-0 sudo[256670]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:21 compute-0 sudo[256676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:21 compute-0 sudo[256676]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:21 compute-0 sudo[256743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szdzanyrlpznuwkmifqufnbpxoybtoyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242801.3547192-4558-246378175336688/AnsiballZ_file.py'
Sep 30 14:33:21 compute-0 sudo[256743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:21 compute-0 sudo[256747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:33:21 compute-0 sudo[256747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:21 compute-0 python3.9[256752]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:33:21 compute-0 sudo[256743]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:22 compute-0 podman[256840]: 2025-09-30 14:33:22.044561552 +0000 UTC m=+0.044416063 container create 8ee10bab6d08d9c851af053af0750fc9594051385181d51b17f4eae911c57dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:33:22 compute-0 systemd[1]: Started libpod-conmon-8ee10bab6d08d9c851af053af0750fc9594051385181d51b17f4eae911c57dbe.scope.
Sep 30 14:33:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:33:22 compute-0 podman[256840]: 2025-09-30 14:33:22.112093184 +0000 UTC m=+0.111947715 container init 8ee10bab6d08d9c851af053af0750fc9594051385181d51b17f4eae911c57dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:33:22 compute-0 podman[256840]: 2025-09-30 14:33:22.118091085 +0000 UTC m=+0.117945596 container start 8ee10bab6d08d9c851af053af0750fc9594051385181d51b17f4eae911c57dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 14:33:22 compute-0 podman[256840]: 2025-09-30 14:33:22.025371007 +0000 UTC m=+0.025225538 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:33:22 compute-0 podman[256840]: 2025-09-30 14:33:22.12088017 +0000 UTC m=+0.120734681 container attach 8ee10bab6d08d9c851af053af0750fc9594051385181d51b17f4eae911c57dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:33:22 compute-0 systemd[1]: libpod-8ee10bab6d08d9c851af053af0750fc9594051385181d51b17f4eae911c57dbe.scope: Deactivated successfully.
Sep 30 14:33:22 compute-0 determined_chaplygin[256857]: 167 167
Sep 30 14:33:22 compute-0 conmon[256857]: conmon 8ee10bab6d08d9c851af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ee10bab6d08d9c851af053af0750fc9594051385181d51b17f4eae911c57dbe.scope/container/memory.events
Sep 30 14:33:22 compute-0 podman[256840]: 2025-09-30 14:33:22.123517751 +0000 UTC m=+0.123372282 container died 8ee10bab6d08d9c851af053af0750fc9594051385181d51b17f4eae911c57dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chaplygin, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e64d0a7572edc78d36e57ebf44447e01cdad4fc37ff929749553fc83dd456ef-merged.mount: Deactivated successfully.
Sep 30 14:33:22 compute-0 podman[256840]: 2025-09-30 14:33:22.15661999 +0000 UTC m=+0.156474511 container remove 8ee10bab6d08d9c851af053af0750fc9594051385181d51b17f4eae911c57dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 14:33:22 compute-0 systemd[1]: libpod-conmon-8ee10bab6d08d9c851af053af0750fc9594051385181d51b17f4eae911c57dbe.scope: Deactivated successfully.
Sep 30 14:33:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f540045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143322 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
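Annotation: the ingress haproxy for the NFS service marks backend nfs.cephfs.1 DOWN after a Layer-4 check fails, i.e. a plain TCP connect to the backend is refused. A minimal sketch of the same kind of check, with a placeholder backend address and port purely for illustration (neither value appears in this log line):

    # l4_check.py - illustrative Layer-4 health check like haproxy's; the host
    # and port below are placeholders, not values taken from the log.
    import socket

    def l4_up(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:   # covers ConnectionRefusedError and timeouts
            return False

    print(l4_up("192.168.122.101", 12049))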
Sep 30 14:33:22 compute-0 podman[256930]: 2025-09-30 14:33:22.323096048 +0000 UTC m=+0.042071411 container create 32bb082e4918338a7580c093d7df617827bae6405f6bea28b51e26a12bcb1994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:33:22 compute-0 systemd[1]: Started libpod-conmon-32bb082e4918338a7580c093d7df617827bae6405f6bea28b51e26a12bcb1994.scope.
Sep 30 14:33:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dedbd37dfc06703ef4335b11980791d2b1f6446fb1e63fc514c4addc2b2a0d7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dedbd37dfc06703ef4335b11980791d2b1f6446fb1e63fc514c4addc2b2a0d7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dedbd37dfc06703ef4335b11980791d2b1f6446fb1e63fc514c4addc2b2a0d7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dedbd37dfc06703ef4335b11980791d2b1f6446fb1e63fc514c4addc2b2a0d7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:22 compute-0 podman[256930]: 2025-09-30 14:33:22.30344915 +0000 UTC m=+0.022424563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:33:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:22 compute-0 podman[256930]: 2025-09-30 14:33:22.410899934 +0000 UTC m=+0.129875327 container init 32bb082e4918338a7580c093d7df617827bae6405f6bea28b51e26a12bcb1994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_allen, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:33:22 compute-0 podman[256930]: 2025-09-30 14:33:22.417602494 +0000 UTC m=+0.136577857 container start 32bb082e4918338a7580c093d7df617827bae6405f6bea28b51e26a12bcb1994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_allen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:33:22 compute-0 podman[256930]: 2025-09-30 14:33:22.420549913 +0000 UTC m=+0.139525296 container attach 32bb082e4918338a7580c093d7df617827bae6405f6bea28b51e26a12bcb1994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:33:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480042c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:22.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:22 compute-0 sudo[257027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xisccppgtqkzxgaigfremaqzlfqkvmkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242802.2242208-4582-196257615614247/AnsiballZ_copy.py'
Sep 30 14:33:22 compute-0 sudo[257027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:22 compute-0 python3.9[257029]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:33:22 compute-0 happy_allen[256972]: {
Sep 30 14:33:22 compute-0 happy_allen[256972]:     "0": [
Sep 30 14:33:22 compute-0 happy_allen[256972]:         {
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "devices": [
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "/dev/loop3"
Sep 30 14:33:22 compute-0 happy_allen[256972]:             ],
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "lv_name": "ceph_lv0",
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "lv_size": "21470642176",
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "name": "ceph_lv0",
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "tags": {
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.cluster_name": "ceph",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.crush_device_class": "",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.encrypted": "0",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.osd_id": "0",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.type": "block",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.vdo": "0",
Sep 30 14:33:22 compute-0 happy_allen[256972]:                 "ceph.with_tpm": "0"
Sep 30 14:33:22 compute-0 happy_allen[256972]:             },
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "type": "block",
Sep 30 14:33:22 compute-0 happy_allen[256972]:             "vg_name": "ceph_vg0"
Sep 30 14:33:22 compute-0 happy_allen[256972]:         }
Sep 30 14:33:22 compute-0 happy_allen[256972]:     ]
Sep 30 14:33:22 compute-0 happy_allen[256972]: }
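Annotation: the happy_allen container is the "ceph-volume lvm list --format json" call launched by the sudo line at 14:33:21; its JSON output above describes a single LVM-backed OSD (osd.0 on /dev/ceph_vg0/ceph_lv0, physical device /dev/loop3). A minimal sketch of parsing that JSON, assuming it has been captured to a file beforehand; the field names follow the output shown above:

    # parse_lvm_list.py - parses the ceph-volume lvm list JSON shown above;
    # assumes the JSON has been captured to lvm_list.json.
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, volumes in osds.items():
        for vol in volumes:
            tags = vol["tags"]
            print(f"osd.{osd_id}: lv={vol['lv_path']} "
                  f"devices={','.join(vol['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")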
Sep 30 14:33:22 compute-0 sudo[257027]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:22 compute-0 systemd[1]: libpod-32bb082e4918338a7580c093d7df617827bae6405f6bea28b51e26a12bcb1994.scope: Deactivated successfully.
Sep 30 14:33:22 compute-0 podman[257035]: 2025-09-30 14:33:22.818800072 +0000 UTC m=+0.027723165 container died 32bb082e4918338a7580c093d7df617827bae6405f6bea28b51e26a12bcb1994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-dedbd37dfc06703ef4335b11980791d2b1f6446fb1e63fc514c4addc2b2a0d7b-merged.mount: Deactivated successfully.
Sep 30 14:33:22 compute-0 podman[257035]: 2025-09-30 14:33:22.862276439 +0000 UTC m=+0.071199562 container remove 32bb082e4918338a7580c093d7df617827bae6405f6bea28b51e26a12bcb1994 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:33:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:22.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:22 compute-0 systemd[1]: libpod-conmon-32bb082e4918338a7580c093d7df617827bae6405f6bea28b51e26a12bcb1994.scope: Deactivated successfully.
Sep 30 14:33:22 compute-0 sudo[256747]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:22 compute-0 sudo[257099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:33:22 compute-0 sudo[257099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:22 compute-0 sudo[257099]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:23 compute-0 sudo[257151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:33:23 compute-0 sudo[257151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:23 compute-0 sudo[257248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyeqghoolbmyibjzwkbnvibpvosbymkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242802.9191089-4606-122415606486387/AnsiballZ_stat.py'
Sep 30 14:33:23 compute-0 sudo[257248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:23 compute-0 python3.9[257250]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:33:23 compute-0 sudo[257248]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:23 compute-0 podman[257293]: 2025-09-30 14:33:23.466924907 +0000 UTC m=+0.044402373 container create 2c6496d183a7e1a9094b092e128c78853f9b2b43961597903d8ae2544815c73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:33:23 compute-0 ceph-mon[74194]: pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:23 compute-0 systemd[1]: Started libpod-conmon-2c6496d183a7e1a9094b092e128c78853f9b2b43961597903d8ae2544815c73e.scope.
Sep 30 14:33:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:33:23 compute-0 podman[257293]: 2025-09-30 14:33:23.445185333 +0000 UTC m=+0.022662859 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:33:23 compute-0 podman[257293]: 2025-09-30 14:33:23.544076637 +0000 UTC m=+0.121554123 container init 2c6496d183a7e1a9094b092e128c78853f9b2b43961597903d8ae2544815c73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:33:23 compute-0 podman[257293]: 2025-09-30 14:33:23.551219939 +0000 UTC m=+0.128697395 container start 2c6496d183a7e1a9094b092e128c78853f9b2b43961597903d8ae2544815c73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:33:23 compute-0 podman[257293]: 2025-09-30 14:33:23.554387474 +0000 UTC m=+0.131864950 container attach 2c6496d183a7e1a9094b092e128c78853f9b2b43961597903d8ae2544815c73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 14:33:23 compute-0 youthful_bell[257332]: 167 167
Sep 30 14:33:23 compute-0 systemd[1]: libpod-2c6496d183a7e1a9094b092e128c78853f9b2b43961597903d8ae2544815c73e.scope: Deactivated successfully.
Sep 30 14:33:23 compute-0 podman[257293]: 2025-09-30 14:33:23.558017871 +0000 UTC m=+0.135495347 container died 2c6496d183a7e1a9094b092e128c78853f9b2b43961597903d8ae2544815c73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:33:23 compute-0 ceph-mgr[74485]: [dashboard INFO request] [192.168.122.100:52216] [POST] [200] [0.002s] [4.0B] [587e4ab5-0d2e-4a73-9a2c-d028679a5a78] /api/prometheus_receiver
Sep 30 14:33:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f942003578b2ba30ac6c11f897b970aeaed0902a25f40510b3b36766a64bde17-merged.mount: Deactivated successfully.
Sep 30 14:33:23 compute-0 podman[257293]: 2025-09-30 14:33:23.600929553 +0000 UTC m=+0.178407019 container remove 2c6496d183a7e1a9094b092e128c78853f9b2b43961597903d8ae2544815c73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:33:23 compute-0 systemd[1]: libpod-conmon-2c6496d183a7e1a9094b092e128c78853f9b2b43961597903d8ae2544815c73e.scope: Deactivated successfully.
Sep 30 14:33:23 compute-0 podman[257431]: 2025-09-30 14:33:23.7774094 +0000 UTC m=+0.042528553 container create 66ea5468c155ea1a4f56db6b871f5f8338b517b63c80857ff3908668e6596c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:33:23 compute-0 systemd[1]: Started libpod-conmon-66ea5468c155ea1a4f56db6b871f5f8338b517b63c80857ff3908668e6596c3c.scope.
Sep 30 14:33:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:33:23 compute-0 sudo[257501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmscuuoegwykgtnrfnbkmgsrootlgbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242803.5931213-4630-112070111551516/AnsiballZ_stat.py'
Sep 30 14:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c38222644eb5efaf5f31b1aaca0e5538547396855b42dad50032ca2539f1c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c38222644eb5efaf5f31b1aaca0e5538547396855b42dad50032ca2539f1c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c38222644eb5efaf5f31b1aaca0e5538547396855b42dad50032ca2539f1c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c38222644eb5efaf5f31b1aaca0e5538547396855b42dad50032ca2539f1c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:23 compute-0 sudo[257501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:23 compute-0 podman[257431]: 2025-09-30 14:33:23.759847918 +0000 UTC m=+0.024967081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:33:23 compute-0 podman[257431]: 2025-09-30 14:33:23.853738108 +0000 UTC m=+0.118857281 container init 66ea5468c155ea1a4f56db6b871f5f8338b517b63c80857ff3908668e6596c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_cerf, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:33:23 compute-0 podman[257431]: 2025-09-30 14:33:23.860232883 +0000 UTC m=+0.125352036 container start 66ea5468c155ea1a4f56db6b871f5f8338b517b63c80857ff3908668e6596c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_cerf, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:33:23 compute-0 podman[257431]: 2025-09-30 14:33:23.866513501 +0000 UTC m=+0.131632754 container attach 66ea5468c155ea1a4f56db6b871f5f8338b517b63c80857ff3908668e6596c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:33:24 compute-0 python3.9[257503]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:33:24 compute-0 sudo[257501]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:24 compute-0 sudo[257687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjlenxznwwfksfgcdqwhbqfwwrzkixhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242803.5931213-4630-112070111551516/AnsiballZ_copy.py'
Sep 30 14:33:24 compute-0 sudo[257687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f540045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:24.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:24 compute-0 lvm[257698]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:33:24 compute-0 lvm[257698]: VG ceph_vg0 finished
Sep 30 14:33:24 compute-0 gifted_cerf[257496]: {}
Sep 30 14:33:24 compute-0 systemd[1]: libpod-66ea5468c155ea1a4f56db6b871f5f8338b517b63c80857ff3908668e6596c3c.scope: Deactivated successfully.
Sep 30 14:33:24 compute-0 podman[257431]: 2025-09-30 14:33:24.559886219 +0000 UTC m=+0.825005392 container died 66ea5468c155ea1a4f56db6b871f5f8338b517b63c80857ff3908668e6596c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_cerf, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:33:24 compute-0 systemd[1]: libpod-66ea5468c155ea1a4f56db6b871f5f8338b517b63c80857ff3908668e6596c3c.scope: Consumed 1.083s CPU time.
Sep 30 14:33:24 compute-0 python3.9[257691]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759242803.5931213-4630-112070111551516/.source _original_basename=.n0e9dis0 follow=False checksum=c8020cdf8f3b1a8a6c3292b810b02f7d7530909f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Sep 30 14:33:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-35c38222644eb5efaf5f31b1aaca0e5538547396855b42dad50032ca2539f1c1-merged.mount: Deactivated successfully.
Sep 30 14:33:24 compute-0 podman[257431]: 2025-09-30 14:33:24.605396461 +0000 UTC m=+0.870515614 container remove 66ea5468c155ea1a4f56db6b871f5f8338b517b63c80857ff3908668e6596c3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:33:24 compute-0 sudo[257687]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:24 compute-0 systemd[1]: libpod-conmon-66ea5468c155ea1a4f56db6b871f5f8338b517b63c80857ff3908668e6596c3c.scope: Deactivated successfully.
Sep 30 14:33:24 compute-0 sudo[257151]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:33:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:33:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:24 compute-0 sudo[257738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:33:24 compute-0 sudo[257738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:24] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:33:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:24] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:33:24 compute-0 sudo[257738]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:24.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480042e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:25 compute-0 python3.9[257889]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:33:25 compute-0 ceph-mon[74194]: pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:25 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:25 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:33:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:26 compute-0 python3.9[258042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:33:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:33:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:26.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:26 compute-0 python3.9[258164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242805.7846308-4708-84405439441156/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:26.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:26 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:27.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:33:27 compute-0 python3.9[258315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Sep 30 14:33:27 compute-0 ceph-mon[74194]: pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:33:28 compute-0 python3.9[258437]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759242807.1118615-4753-220202975469939/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Sep 30 14:33:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:28 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f480042e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:28 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:28.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:28.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:28 compute-0 sudo[258587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqvrublcsqeftvyqmpavkmhrzdjphfxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242808.5940204-4804-52329388484762/AnsiballZ_container_config_data.py'
Sep 30 14:33:28 compute-0 sudo[258587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:28 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:29 compute-0 python3.9[258589]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Sep 30 14:33:29 compute-0 sudo[258587]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:33:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:33:29 compute-0 sudo[258740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzucsuodklqirrtjgdtjejvcohoqkjnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242809.4289322-4831-137019435185400/AnsiballZ_container_config_hash.py'
Sep 30 14:33:29 compute-0 sudo[258740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:33:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:33:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:33:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:33:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:33:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:33:29 compute-0 ceph-mon[74194]: pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:33:29 compute-0 python3.9[258742]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 14:33:29 compute-0 sudo[258740]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:30 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:33:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:30 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:30 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48004300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:30.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:30 compute-0 sudo[258893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsxashznwuhddqciohorlgqyifgpdpml ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759242810.3882008-4861-174168190932467/AnsiballZ_edpm_container_manage.py'
Sep 30 14:33:30 compute-0 sudo[258893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:30.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:30 compute-0 python3[258895]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 14:33:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:30 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:31 compute-0 ceph-mon[74194]: pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:33:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:32.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:32 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:33 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:33:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:33 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:33:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:33.563Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:33:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:33.564Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:33:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:33.565Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:33:33 compute-0 ceph-mon[74194]: pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:33:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:34 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:33:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:34 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:34.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:34] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:33:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:34] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:33:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:34 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:36 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:36 compute-0 ceph-mon[74194]: pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:33:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:36 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:36.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:36 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:33:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:36.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:36 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:37.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:33:37 compute-0 podman[258955]: 2025-09-30 14:33:37.389329704 +0000 UTC m=+1.323707798 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Sep 30 14:33:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:33:38.249 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:33:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:33:38.249 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:33:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:33:38.250 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:33:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:38.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:38.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:38 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:38 compute-0 ceph-mon[74194]: pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:40.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:40 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:41 compute-0 sudo[259010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:33:41 compute-0 sudo[259010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:33:41 compute-0 sudo[259010]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:42 compute-0 podman[259034]: 2025-09-30 14:33:42.163952427 +0000 UTC m=+0.461217818 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid)
Sep 30 14:33:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143342 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:33:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:42 compute-0 podman[258984]: 2025-09-30 14:33:42.30155905 +0000 UTC m=+4.876519479 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:33:42 compute-0 ceph-mon[74194]: pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:42 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:42.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:42 compute-0 podman[258908]: 2025-09-30 14:33:42.855586139 +0000 UTC m=+11.828324884 image pull 613e2b735827096139e990f475c5ac5de0e55d8048941a4521c0c17a4351e975 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Sep 30 14:33:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:42.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:43 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30002900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:43 compute-0 podman[259096]: 2025-09-30 14:33:43.009287334 +0000 UTC m=+0.060931386 container create 04c40afe98c03ead5a19c6f7a30aef6b625f019f8462388742f25d51fa92522d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:33:43 compute-0 podman[259096]: 2025-09-30 14:33:42.984347885 +0000 UTC m=+0.035992017 image pull 613e2b735827096139e990f475c5ac5de0e55d8048941a4521c0c17a4351e975 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Sep 30 14:33:43 compute-0 python3[258895]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Sep 30 14:33:43 compute-0 podman[259122]: 2025-09-30 14:33:43.118825924 +0000 UTC m=+0.050328342 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 14:33:43 compute-0 sudo[258893]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:43 compute-0 ceph-mon[74194]: pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:43 compute-0 ceph-mon[74194]: pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:33:43 compute-0 sudo[259305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isfaepnqwccxrszsdnbxvbdihxlqgocw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242823.2768714-4885-43821406879330/AnsiballZ_stat.py'
Sep 30 14:33:43 compute-0 sudo[259305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:43.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:33:43 compute-0 python3.9[259307]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:33:43 compute-0 sudo[259305]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:33:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:44 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:44.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:44 compute-0 sudo[259460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thmwzabnzwtwfjsdzlpqxusyfpglcjnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242824.332164-4921-187516566117858/AnsiballZ_container_config_data.py'
Sep 30 14:33:44 compute-0 sudo[259460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:33:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:33:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:44] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Sep 30 14:33:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:44] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Sep 30 14:33:44 compute-0 python3.9[259462]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Sep 30 14:33:44 compute-0 sudo[259460]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:44.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:45 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:45 compute-0 sudo[259613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrtbmwxdfsmhxyjpvszlzrudrgjfuahm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242825.140243-4948-138034675702921/AnsiballZ_container_config_hash.py'
Sep 30 14:33:45 compute-0 sudo[259613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:45 compute-0 ceph-mon[74194]: pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:33:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:33:45 compute-0 python3.9[259615]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Sep 30 14:33:45 compute-0 sudo[259613]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30002900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:46 compute-0 sudo[259766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhuhmowunlawccvcpxpulnesmaudauxg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759242826.0751588-4978-44130709273973/AnsiballZ_edpm_container_manage.py'
Sep 30 14:33:46 compute-0 sudo[259766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:33:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:46 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:46.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:46 compute-0 python3[259768]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Sep 30 14:33:46 compute-0 podman[259804]: 2025-09-30 14:33:46.80437545 +0000 UTC m=+0.061228614 container create 79b56353f797a95c3fad2537e4c6c28f664e9fb123787f88de96d68a2401be2b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923)
Sep 30 14:33:46 compute-0 podman[259804]: 2025-09-30 14:33:46.764468139 +0000 UTC m=+0.021321323 image pull 613e2b735827096139e990f475c5ac5de0e55d8048941a4521c0c17a4351e975 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Sep 30 14:33:46 compute-0 python3[259768]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Sep 30 14:33:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:46.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:46 compute-0 sudo[259766]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:47 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:47.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:33:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:47.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:33:47 compute-0 ceph-mon[74194]: pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:33:47 compute-0 sudo[259993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzlrtekgebndvkosryonbmxxqckmqccc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242827.4030392-5002-74467959569518/AnsiballZ_stat.py'
Sep 30 14:33:47 compute-0 sudo[259993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:47 compute-0 python3.9[259995]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:33:47 compute-0 sudo[259993]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:48 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30002900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:48.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:48 compute-0 sudo[260148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jauofirgqvozvmsgnsrdfcwrjydgljfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242828.227582-5029-198378111525214/AnsiballZ_file.py'
Sep 30 14:33:48 compute-0 sudo[260148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:48 compute-0 python3.9[260150]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:33:48 compute-0 sudo[260148]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:48.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:49 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:49 compute-0 sudo[260299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zulnldoenwpoevagmtoxqwtxwmcsuxwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242828.7977006-5029-18154469821341/AnsiballZ_copy.py'
Sep 30 14:33:49 compute-0 sudo[260299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:49 compute-0 python3.9[260301]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759242828.7977006-5029-18154469821341/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Sep 30 14:33:49 compute-0 sudo[260299]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:49 compute-0 ceph-mon[74194]: pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:49 compute-0 sudo[260377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nftfeyyweqmfwsjyodqjdvoegitutueq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242828.7977006-5029-18154469821341/AnsiballZ_systemd.py'
Sep 30 14:33:49 compute-0 sudo[260377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:50 compute-0 python3.9[260379]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Sep 30 14:33:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:50 compute-0 systemd[1]: Reloading.
Sep 30 14:33:50 compute-0 systemd-rc-local-generator[260407]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:33:50 compute-0 systemd-sysv-generator[260411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:33:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:50 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:50.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:50 compute-0 sudo[260377]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:50 compute-0 sudo[260488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epozwxxkvdcjpbplruunqsuxnquldttp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242828.7977006-5029-18154469821341/AnsiballZ_systemd.py'
Sep 30 14:33:50 compute-0 sudo[260488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:33:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:50.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:33:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:51 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:51 compute-0 python3.9[260490]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Sep 30 14:33:51 compute-0 systemd[1]: Reloading.
Sep 30 14:33:51 compute-0 systemd-rc-local-generator[260521]: /etc/rc.d/rc.local is not marked executable, skipping.
Sep 30 14:33:51 compute-0 systemd-sysv-generator[260525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Sep 30 14:33:51 compute-0 systemd[1]: Starting nova_compute container...
Sep 30 14:33:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Sep 30 14:33:51 compute-0 ceph-mon[74194]: pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:51 compute-0 podman[260530]: 2025-09-30 14:33:51.749599993 +0000 UTC m=+0.109218692 container init 79b56353f797a95c3fad2537e4c6c28f664e9fb123787f88de96d68a2401be2b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Sep 30 14:33:51 compute-0 podman[260530]: 2025-09-30 14:33:51.762786657 +0000 UTC m=+0.122405336 container start 79b56353f797a95c3fad2537e4c6c28f664e9fb123787f88de96d68a2401be2b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Sep 30 14:33:51 compute-0 podman[260530]: nova_compute
Sep 30 14:33:51 compute-0 nova_compute[260546]: + sudo -E kolla_set_configs
Sep 30 14:33:51 compute-0 systemd[1]: Started nova_compute container.
Sep 30 14:33:51 compute-0 sudo[260488]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Validating config file
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying service configuration files
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Deleting /etc/nova/nova.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Deleting /etc/ceph
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Creating directory /etc/ceph
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/ceph
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Writing out command to execute
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Sep 30 14:33:51 compute-0 nova_compute[260546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Sep 30 14:33:51 compute-0 nova_compute[260546]: ++ cat /run_command
Sep 30 14:33:51 compute-0 nova_compute[260546]: + CMD=nova-compute
Sep 30 14:33:51 compute-0 nova_compute[260546]: + ARGS=
Sep 30 14:33:51 compute-0 nova_compute[260546]: + sudo kolla_copy_cacerts
Sep 30 14:33:51 compute-0 nova_compute[260546]: + [[ ! -n '' ]]
Sep 30 14:33:51 compute-0 nova_compute[260546]: + . kolla_extend_start
Sep 30 14:33:51 compute-0 nova_compute[260546]: Running command: 'nova-compute'
Sep 30 14:33:51 compute-0 nova_compute[260546]: + echo 'Running command: '\''nova-compute'\'''
Sep 30 14:33:51 compute-0 nova_compute[260546]: + umask 0022
Sep 30 14:33:51 compute-0 nova_compute[260546]: + exec nova-compute
Sep 30 14:33:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:52 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:33:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:52.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:33:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:52.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:53 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:33:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9389 writes, 35K keys, 9389 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9389 writes, 2394 syncs, 3.92 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 812 writes, 1249 keys, 812 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                           Interval WAL: 812 writes, 406 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Sep 30 14:33:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:53.567Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:33:53 compute-0 ceph-mon[74194]: pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:33:53 compute-0 python3.9[260711]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:33:54 compute-0 nova_compute[260546]: 2025-09-30 14:33:54.180 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Sep 30 14:33:54 compute-0 nova_compute[260546]: 2025-09-30 14:33:54.180 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Sep 30 14:33:54 compute-0 nova_compute[260546]: 2025-09-30 14:33:54.180 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Sep 30 14:33:54 compute-0 nova_compute[260546]: 2025-09-30 14:33:54.181 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Sep 30 14:33:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143354 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:33:54 compute-0 nova_compute[260546]: 2025-09-30 14:33:54.347 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:33:54 compute-0 nova_compute[260546]: 2025-09-30 14:33:54.379 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:33:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:54 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:54.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:54] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Sep 30 14:33:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:33:54] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Sep 30 14:33:54 compute-0 python3.9[260865]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:33:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:54.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:54 compute-0 nova_compute[260546]: 2025-09-30 14:33:54.989 2 INFO nova.virt.driver [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Sep 30 14:33:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:55 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.169 2 INFO nova.compute.provider_config [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.189 2 DEBUG oslo_concurrency.lockutils [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.190 2 DEBUG oslo_concurrency.lockutils [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.190 2 DEBUG oslo_concurrency.lockutils [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.190 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.191 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.191 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.191 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.191 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.191 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.191 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.192 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.192 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.192 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.192 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.192 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.192 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.193 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.193 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.193 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.193 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.193 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.193 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.193 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.194 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.194 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.194 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.194 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.194 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.194 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.194 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.195 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.195 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.195 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.195 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.196 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.196 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.196 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.196 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.196 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.197 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.197 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.197 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.197 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.197 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.198 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.198 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.198 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.198 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.198 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.198 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.199 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.199 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.199 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.199 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.199 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.199 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.200 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.200 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.200 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.200 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.200 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.200 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.200 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.201 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.201 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.201 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.201 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.201 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.202 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.202 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.202 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.202 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.202 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.203 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.203 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.203 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.203 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.203 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.204 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.204 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.204 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.204 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.204 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.205 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.205 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.205 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.205 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.205 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.206 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.206 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.206 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.206 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.206 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.206 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.207 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.207 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.207 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.207 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.207 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.207 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.207 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.208 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.208 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.208 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.208 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.208 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.208 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.209 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.209 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.209 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.209 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.209 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.209 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.210 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.210 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.210 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.210 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.210 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.210 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.211 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.211 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.211 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.211 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.211 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.211 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.212 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.212 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.212 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.212 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.212 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.213 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.213 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.213 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.213 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.213 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.214 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.214 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.214 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.214 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.214 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.214 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.215 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.215 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.215 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.215 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.215 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.216 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.216 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.216 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.216 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.216 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.216 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.217 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.217 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.217 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.217 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.217 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.217 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.218 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.218 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.218 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.218 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.218 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.218 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.219 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.219 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.219 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.219 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.219 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.219 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.220 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.220 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.220 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.220 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.220 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.220 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.221 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.221 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.221 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.221 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.221 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.222 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.222 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.222 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.222 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.222 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.223 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.223 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.223 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.223 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.223 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.223 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.223 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.224 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.224 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.224 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.224 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.224 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.224 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.225 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.225 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.225 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.225 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.225 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.225 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.225 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.226 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.226 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.226 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.226 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.226 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.226 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.227 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.227 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.227 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.227 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.228 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.228 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.228 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.228 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.228 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.228 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.229 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.229 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.229 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.229 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.229 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.229 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.229 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.230 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.230 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.230 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.230 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.230 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.230 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.231 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.231 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.231 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.231 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.231 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.232 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.232 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.232 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.232 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.232 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.232 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.233 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.233 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.233 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.233 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.233 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.234 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.234 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.234 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.234 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.234 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.235 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.235 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.235 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.235 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.235 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.235 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.236 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.236 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.236 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.236 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.236 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.236 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.237 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.237 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.237 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.237 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.237 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.237 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.237 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.238 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.238 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.238 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.238 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.238 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.238 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.238 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.239 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.239 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.239 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.239 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.239 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.239 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.240 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.240 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.240 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.240 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.240 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.240 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.240 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.241 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.241 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.241 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.241 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.241 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.241 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.241 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.242 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.242 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.242 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.242 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.242 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.242 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.243 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.243 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.243 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.243 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.243 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.243 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.244 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.244 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.244 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.244 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.244 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.244 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.245 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.245 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.245 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.245 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.245 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.246 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.246 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.246 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.246 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.246 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.246 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.247 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.247 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.247 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.247 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.247 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.247 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.248 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.248 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.248 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.248 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.248 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.249 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.249 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.249 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.249 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.249 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.250 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.250 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.250 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.250 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.250 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.251 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.251 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.251 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.251 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.251 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.251 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.252 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.252 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.252 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.253 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.253 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.253 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.253 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.253 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.254 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.254 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.254 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.254 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.254 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.255 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.255 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.255 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.255 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.255 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.255 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.255 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.256 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.256 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.256 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.256 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.256 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.256 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.256 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.257 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.257 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.257 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.257 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.257 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.257 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.258 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.258 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.258 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.258 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.258 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.258 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.259 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.259 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.259 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.259 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.259 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.259 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.259 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.260 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.260 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.260 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.260 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.260 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.260 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.261 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.261 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.261 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.261 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.261 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.261 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.262 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.262 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.262 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.262 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.262 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.262 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.263 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.263 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.263 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.263 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.263 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.263 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.264 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.264 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.264 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.264 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.264 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.264 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.265 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.265 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.265 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.265 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.265 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.265 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.266 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.266 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.266 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.266 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.266 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.266 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.267 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.267 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.267 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.267 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.267 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.267 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.267 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.268 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.268 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.268 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.268 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.268 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.269 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.269 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.269 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.269 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.269 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.270 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.270 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.270 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.270 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.270 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.271 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.271 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.271 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.271 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.271 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.271 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.272 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.272 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.272 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.272 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.273 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.273 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.273 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.273 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.273 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.274 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.274 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.274 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.274 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.274 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.275 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.275 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.275 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.275 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.275 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.275 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.276 2 WARNING oslo_config.cfg [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Sep 30 14:33:55 compute-0 nova_compute[260546]: live_migration_uri is deprecated for removal in favor of two other options that
Sep 30 14:33:55 compute-0 nova_compute[260546]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Sep 30 14:33:55 compute-0 nova_compute[260546]: and ``live_migration_inbound_addr`` respectively.
Sep 30 14:33:55 compute-0 nova_compute[260546]: ).  Its value may be silently ignored in the future.
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.276 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
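[editor's note] The block of "group.option = value ... log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609" lines above, the "****" masking of key_manager.fixed_key and neutron.metadata_proxy_shared_secret, and the "deprecated for removal" WARNING for libvirt.live_migration_uri are all standard oslo.config behaviour. The following is a minimal, hypothetical sketch of that mechanism, not Nova's own code: the "demo" group, the option names, and the default values are illustrative assumptions; only the oslo_config calls themselves (StrOpt, register_opts, log_opt_values, secret=True, deprecated_for_removal=True) are real API.

    # Sketch only: reproduces the style of the config dump above with made-up
    # options in a made-up "demo" group. Requires the oslo.config package.
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()

    opts = [
        # deprecated_for_removal/deprecated_reason drive the kind of
        # "Deprecated: Option ... is deprecated for removal" WARNING seen
        # above when an operator actually sets the option.
        cfg.StrOpt('live_migration_uri',
                   default='qemu+tls://%s/system',
                   deprecated_for_removal=True,
                   deprecated_reason='Use live_migration_scheme and '
                                     'live_migration_inbound_addr instead.'),
        # secret=True makes log_opt_values() print the value as '****',
        # as with key_manager.fixed_key above.
        cfg.StrOpt('fixed_key', secret=True, default='not-a-real-key'),
    ]
    CONF.register_opts(opts, group='demo')

    CONF([])  # parse an empty command line, no config files

    # Walk every registered option and log "demo.option = value" at DEBUG,
    # which is what produces the long dump in this journal section.
    CONF.log_opt_values(LOG, logging.DEBUG)

Run as a plain script, this logs lines such as "demo.live_migration_uri = qemu+tls://%s/system" and "demo.fixed_key = ****", mirroring the format of the nova_compute output above.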
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.276 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.277 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.277 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.277 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.277 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.277 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.278 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.278 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.278 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.278 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.279 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.279 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.279 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.279 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.279 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.280 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.280 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.280 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.rbd_secret_uuid        = 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.280 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.280 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.281 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.281 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.281 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.281 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.281 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.282 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.282 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.282 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.282 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.283 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.283 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.283 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.283 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.283 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.284 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.284 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.284 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.284 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.284 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.285 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.285 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.285 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.285 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.286 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.286 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.286 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.286 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.286 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.287 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.287 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.287 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.287 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.287 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.288 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.288 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.288 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.288 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.289 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.289 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.289 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.289 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.289 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.290 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.290 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.290 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.290 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.290 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.291 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.291 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.291 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.291 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.291 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.292 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.292 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.292 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.292 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.292 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.292 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.293 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.293 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.293 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.293 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.293 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.293 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.293 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.294 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.294 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.294 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.294 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.294 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.294 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.295 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.295 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.295 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.295 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.295 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.295 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.296 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.296 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.296 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.296 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.297 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.297 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.297 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.297 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.297 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.297 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.298 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.298 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.298 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.298 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.298 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.299 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.299 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.299 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.299 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.299 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.299 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.300 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.300 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.300 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.300 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.300 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.301 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.301 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.301 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.301 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.302 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.302 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.302 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.302 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.302 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.303 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.303 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.303 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.303 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.303 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.304 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.304 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.304 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.305 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.305 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.305 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.305 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.305 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.305 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.306 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.306 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.306 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.306 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.306 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.306 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.307 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.307 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.307 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.307 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.307 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.307 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.308 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.308 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.308 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.308 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.308 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.308 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.309 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.309 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.309 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.309 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.309 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.310 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.310 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.310 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.310 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.310 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.310 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.310 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.311 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.311 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.311 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.311 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.311 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.311 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.312 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.312 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.312 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.312 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.312 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.312 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.313 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.313 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.313 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.313 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.313 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.313 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.314 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.314 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.314 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.314 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.314 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.314 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.315 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.315 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.315 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.315 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.315 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.315 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.316 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.316 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.316 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.316 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.316 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.316 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.317 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.317 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.317 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.317 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.317 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.317 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.317 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.318 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.318 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.318 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.318 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.318 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.318 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.319 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.319 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.319 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.319 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.319 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.319 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.319 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.320 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.320 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.320 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.320 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.320 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.321 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.321 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.321 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.321 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.321 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.322 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.322 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.322 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.322 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.322 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.322 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.323 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.323 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.323 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.323 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.324 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.324 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.324 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.324 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.324 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.325 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.325 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.325 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.325 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.325 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.325 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.326 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.326 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.326 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.326 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.326 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.327 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.327 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.327 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.327 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.327 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.328 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.328 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.328 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.328 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.328 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.329 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.329 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.329 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.329 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.329 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.330 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.330 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.330 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.330 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.331 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.331 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.331 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.331 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.331 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.332 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.332 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.332 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.332 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.332 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.332 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.333 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.333 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.333 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.333 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.333 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.333 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.333 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.334 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.334 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.334 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.334 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.334 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.334 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.335 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.335 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.335 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.335 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.335 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.335 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.335 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.336 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.336 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.336 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.336 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.336 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.336 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.337 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.337 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.337 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.337 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.337 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.337 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.338 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.338 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.338 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.338 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.338 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.339 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.339 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.339 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.339 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.340 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.340 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.340 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.340 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.340 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.341 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.341 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.341 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.341 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.341 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.342 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.342 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.342 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.342 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.342 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.342 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.343 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.343 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.343 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.343 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.343 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.343 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.344 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.344 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.344 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.344 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.344 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.344 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.345 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.345 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.345 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.345 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.345 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.346 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.346 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.346 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.346 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.346 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.346 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.346 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.347 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.347 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.347 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.347 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.347 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.347 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.348 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.348 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.348 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.348 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.348 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.348 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.349 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.349 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.349 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.349 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.350 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.350 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.350 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.350 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.350 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.351 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.351 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.351 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.351 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.351 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.352 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.352 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.352 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.352 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.352 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.353 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.353 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.353 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.353 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.353 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.353 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.354 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.354 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.354 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.354 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.354 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.354 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.355 2 DEBUG oslo_service.service [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.356 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.369 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.370 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.370 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.371 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Sep 30 14:33:55 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Sep 30 14:33:55 compute-0 systemd[1]: Started libvirt QEMU daemon.
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.458 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7efc3c7941f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.462 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7efc3c7941f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.463 2 INFO nova.virt.libvirt.driver [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Connection event '1' reason 'None'
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.480 2 WARNING nova.virt.libvirt.driver [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Sep 30 14:33:55 compute-0 nova_compute[260546]: 2025-09-30 14:33:55.481 2 DEBUG nova.virt.libvirt.volume.mount [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Sep 30 14:33:55 compute-0 python3.9[261056]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Sep 30 14:33:55 compute-0 ceph-mon[74194]: pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:56 compute-0 sudo[261227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unfrtuewabbjldswxilhcdzozmceftsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242836.0556376-5209-29743407458063/AnsiballZ_podman_container.py'
Sep 30 14:33:56 compute-0 sudo[261227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.347 2 INFO nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Libvirt host capabilities <capabilities>
Sep 30 14:33:56 compute-0 nova_compute[260546]: 
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <host>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <uuid>294e3813-d409-45a1-9fb5-458cf671b312</uuid>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <cpu>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <arch>x86_64</arch>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model>EPYC-Rome-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <vendor>AMD</vendor>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <microcode version='16777317'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <signature family='23' model='49' stepping='0'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <maxphysaddr mode='emulate' bits='40'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='x2apic'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='tsc-deadline'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='osxsave'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='hypervisor'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='tsc_adjust'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='spec-ctrl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='stibp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='arch-capabilities'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='cmp_legacy'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='topoext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='virt-ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='lbrv'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='tsc-scale'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='vmcb-clean'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='pause-filter'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='pfthreshold'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='svme-addr-chk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='rdctl-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='skip-l1dfl-vmentry'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='mds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature name='pschange-mc-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <pages unit='KiB' size='4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <pages unit='KiB' size='2048'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <pages unit='KiB' size='1048576'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </cpu>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <power_management>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <suspend_mem/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </power_management>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <iommu support='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <migration_features>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <live/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <uri_transports>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <uri_transport>tcp</uri_transport>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <uri_transport>rdma</uri_transport>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </uri_transports>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </migration_features>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <topology>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <cells num='1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <cell id='0'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:           <memory unit='KiB'>7864116</memory>
Sep 30 14:33:56 compute-0 nova_compute[260546]:           <pages unit='KiB' size='4'>1966029</pages>
Sep 30 14:33:56 compute-0 nova_compute[260546]:           <pages unit='KiB' size='2048'>0</pages>
Sep 30 14:33:56 compute-0 nova_compute[260546]:           <pages unit='KiB' size='1048576'>0</pages>
Sep 30 14:33:56 compute-0 nova_compute[260546]:           <distances>
Sep 30 14:33:56 compute-0 nova_compute[260546]:             <sibling id='0' value='10'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:           </distances>
Sep 30 14:33:56 compute-0 nova_compute[260546]:           <cpus num='8'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:           </cpus>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         </cell>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </cells>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </topology>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <cache>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </cache>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <secmodel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model>selinux</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <doi>0</doi>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </secmodel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <secmodel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model>dac</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <doi>0</doi>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <baselabel type='kvm'>+107:+107</baselabel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <baselabel type='qemu'>+107:+107</baselabel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </secmodel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </host>
Sep 30 14:33:56 compute-0 nova_compute[260546]: 
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <guest>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <os_type>hvm</os_type>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <arch name='i686'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <wordsize>32</wordsize>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <domain type='qemu'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <domain type='kvm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </arch>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <features>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <pae/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <nonpae/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <acpi default='on' toggle='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <apic default='on' toggle='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <cpuselection/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <deviceboot/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <disksnapshot default='on' toggle='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <externalSnapshot/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </features>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </guest>
Sep 30 14:33:56 compute-0 nova_compute[260546]: 
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <guest>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <os_type>hvm</os_type>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <arch name='x86_64'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <wordsize>64</wordsize>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <domain type='qemu'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <domain type='kvm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </arch>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <features>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <acpi default='on' toggle='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <apic default='on' toggle='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <cpuselection/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <deviceboot/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <disksnapshot default='on' toggle='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <externalSnapshot/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </features>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </guest>
Sep 30 14:33:56 compute-0 nova_compute[260546]: 
Sep 30 14:33:56 compute-0 nova_compute[260546]: </capabilities>
Sep 30 14:33:56 compute-0 nova_compute[260546]: 
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.356 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.385 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Sep 30 14:33:56 compute-0 nova_compute[260546]: <domainCapabilities>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <domain>kvm</domain>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <machine>pc-i440fx-rhel7.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <arch>i686</arch>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <vcpu max='240'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <iothreads supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <os supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <enum name='firmware'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <loader supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>rom</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pflash</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='readonly'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>yes</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>no</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='secure'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>no</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </loader>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </os>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <cpu>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='host-passthrough' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='hostPassthroughMigratable'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>on</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>off</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='maximum' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='maximumMigratable'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>on</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>off</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='host-model' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <vendor>AMD</vendor>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='x2apic'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='hypervisor'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='stibp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='overflow-recov'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='succor'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='lbrv'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc-scale'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='flushbyasid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pause-filter'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pfthreshold'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='rdctl-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='mds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='gds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='rfds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='disable' name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='custom' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Dhyana-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Genoa'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='auto-ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='auto-ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-128'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-256'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-512'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v6'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v7'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='KnightsMill'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4fmaps'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4vnniw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512er'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512pf'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='KnightsMill-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4fmaps'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4vnniw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512er'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512pf'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G4-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tbm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G5-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tbm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SierraForest'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ne-convert'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cmpccxadd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SierraForest-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ne-convert'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cmpccxadd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 14:33:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='athlon'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='athlon-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='core2duo'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='core2duo-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='coreduo'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='coreduo-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='n270'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='n270-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='phenom'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='phenom-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </cpu>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <memoryBacking supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <enum name='sourceType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>file</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>anonymous</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>memfd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </memoryBacking>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <devices>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <disk supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='diskDevice'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>disk</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>cdrom</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>floppy</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>lun</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='bus'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>ide</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>fdc</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>scsi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>sata</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-non-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </disk>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <graphics supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vnc</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>egl-headless</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>dbus</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </graphics>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <video supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='modelType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vga</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>cirrus</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>none</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>bochs</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>ramfb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </video>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <hostdev supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='mode'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>subsystem</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='startupPolicy'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>default</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>mandatory</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>requisite</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>optional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='subsysType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pci</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>scsi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='capsType'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='pciBackend'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </hostdev>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <rng supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-non-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>random</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>egd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>builtin</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </rng>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <filesystem supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='driverType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>path</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>handle</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtiofs</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </filesystem>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <tpm supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tpm-tis</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tpm-crb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>emulator</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>external</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendVersion'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>2.0</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </tpm>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <redirdev supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='bus'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </redirdev>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <channel supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pty</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>unix</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </channel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <crypto supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>qemu</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>builtin</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </crypto>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <interface supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>default</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>passt</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </interface>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <panic supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>isa</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>hyperv</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </panic>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </devices>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <features>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <gic supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <vmcoreinfo supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <genid supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <backingStoreInput supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <backup supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <async-teardown supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <ps2 supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <sev supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <sgx supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <hyperv supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='features'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>relaxed</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vapic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>spinlocks</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vpindex</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>runtime</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>synic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>stimer</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>reset</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vendor_id</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>frequencies</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>reenlightenment</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tlbflush</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>ipi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>avic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>emsr_bitmap</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>xmm_input</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </hyperv>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <launchSecurity supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </features>
Sep 30 14:33:56 compute-0 nova_compute[260546]: </domainCapabilities>
Sep 30 14:33:56 compute-0 nova_compute[260546]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.391 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Sep 30 14:33:56 compute-0 nova_compute[260546]: <domainCapabilities>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <domain>kvm</domain>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <machine>pc-q35-rhel9.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <arch>i686</arch>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <vcpu max='4096'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <iothreads supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <os supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <enum name='firmware'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <loader supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>rom</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pflash</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='readonly'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>yes</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>no</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='secure'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>no</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </loader>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </os>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <cpu>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='host-passthrough' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='hostPassthroughMigratable'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>on</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>off</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:56 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f30003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='maximum' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='maximumMigratable'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>on</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>off</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='host-model' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <vendor>AMD</vendor>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='x2apic'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='hypervisor'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='stibp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='overflow-recov'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='succor'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='lbrv'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc-scale'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='flushbyasid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pause-filter'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pfthreshold'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='rdctl-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='mds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='gds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='rfds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='disable' name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='custom' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Dhyana-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Genoa'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='auto-ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='auto-ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-128'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-256'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-512'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v6'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:56.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v7'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='KnightsMill'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4fmaps'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4vnniw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512er'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512pf'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='KnightsMill-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4fmaps'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4vnniw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512er'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512pf'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G4-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tbm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G5-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tbm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SierraForest'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ne-convert'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cmpccxadd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SierraForest-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ne-convert'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cmpccxadd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='athlon'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='athlon-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='core2duo'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='core2duo-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='coreduo'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='coreduo-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='n270'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='n270-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='phenom'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='phenom-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </cpu>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <memoryBacking supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <enum name='sourceType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>file</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>anonymous</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>memfd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </memoryBacking>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <devices>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <disk supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='diskDevice'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>disk</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>cdrom</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>floppy</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>lun</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='bus'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>fdc</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>scsi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>sata</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-non-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </disk>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <graphics supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vnc</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>egl-headless</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>dbus</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </graphics>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <video supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='modelType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vga</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>cirrus</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>none</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>bochs</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>ramfb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </video>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <hostdev supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='mode'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>subsystem</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='startupPolicy'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>default</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>mandatory</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>requisite</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>optional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='subsysType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pci</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>scsi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='capsType'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='pciBackend'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </hostdev>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <rng supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-non-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>random</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>egd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>builtin</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </rng>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <filesystem supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='driverType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>path</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>handle</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtiofs</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </filesystem>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <tpm supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tpm-tis</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tpm-crb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>emulator</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>external</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendVersion'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>2.0</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </tpm>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <redirdev supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='bus'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </redirdev>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <channel supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pty</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>unix</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </channel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <crypto supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>qemu</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>builtin</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </crypto>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <interface supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>default</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>passt</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </interface>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <panic supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>isa</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>hyperv</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </panic>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </devices>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <features>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <gic supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <vmcoreinfo supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <genid supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <backingStoreInput supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <backup supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <async-teardown supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <ps2 supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <sev supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <sgx supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <hyperv supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='features'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>relaxed</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vapic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>spinlocks</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vpindex</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>runtime</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>synic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>stimer</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>reset</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vendor_id</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>frequencies</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>reenlightenment</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tlbflush</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>ipi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>avic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>emsr_bitmap</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>xmm_input</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </hyperv>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <launchSecurity supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </features>
Sep 30 14:33:56 compute-0 nova_compute[260546]: </domainCapabilities>
Sep 30 14:33:56 compute-0 nova_compute[260546]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.428 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.432 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Sep 30 14:33:56 compute-0 nova_compute[260546]: <domainCapabilities>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <domain>kvm</domain>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <machine>pc-i440fx-rhel7.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <arch>x86_64</arch>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <vcpu max='240'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <iothreads supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <os supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <enum name='firmware'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <loader supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>rom</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pflash</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='readonly'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>yes</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>no</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='secure'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>no</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </loader>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </os>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <cpu>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='host-passthrough' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='hostPassthroughMigratable'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>on</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>off</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='maximum' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='maximumMigratable'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>on</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>off</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='host-model' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <vendor>AMD</vendor>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='x2apic'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='hypervisor'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='stibp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='overflow-recov'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='succor'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='lbrv'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc-scale'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='flushbyasid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pause-filter'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pfthreshold'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='rdctl-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='mds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='gds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='rfds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='disable' name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='custom' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Dhyana-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Genoa'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='auto-ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='auto-ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-128'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-256'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-512'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v6'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v7'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='KnightsMill'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4fmaps'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4vnniw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512er'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512pf'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='KnightsMill-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4fmaps'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4vnniw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512er'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512pf'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G4-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tbm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G5-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tbm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SierraForest'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ne-convert'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cmpccxadd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SierraForest-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ne-convert'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cmpccxadd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='athlon'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='athlon-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='core2duo'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='core2duo-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='coreduo'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='coreduo-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='n270'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='n270-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='phenom'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='phenom-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </cpu>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <memoryBacking supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <enum name='sourceType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>file</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>anonymous</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>memfd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </memoryBacking>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <devices>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <disk supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='diskDevice'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>disk</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>cdrom</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>floppy</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>lun</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='bus'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>ide</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>fdc</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>scsi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>sata</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-non-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </disk>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <graphics supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vnc</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>egl-headless</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>dbus</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </graphics>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <video supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='modelType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vga</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>cirrus</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>none</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>bochs</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>ramfb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </video>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <hostdev supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='mode'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>subsystem</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='startupPolicy'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>default</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>mandatory</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>requisite</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>optional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='subsysType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pci</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>scsi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='capsType'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='pciBackend'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </hostdev>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <rng supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-non-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>random</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>egd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>builtin</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </rng>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <filesystem supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='driverType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>path</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>handle</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtiofs</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </filesystem>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <tpm supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tpm-tis</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tpm-crb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>emulator</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>external</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendVersion'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>2.0</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </tpm>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <redirdev supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='bus'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </redirdev>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <channel supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pty</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>unix</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </channel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <crypto supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>qemu</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>builtin</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </crypto>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <interface supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>default</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>passt</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </interface>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <panic supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>isa</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>hyperv</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </panic>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </devices>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <features>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <gic supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <vmcoreinfo supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <genid supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <backingStoreInput supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <backup supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <async-teardown supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <ps2 supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <sev supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <sgx supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <hyperv supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='features'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>relaxed</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vapic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>spinlocks</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vpindex</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>runtime</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>synic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>stimer</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>reset</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vendor_id</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>frequencies</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>reenlightenment</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tlbflush</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>ipi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>avic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>emsr_bitmap</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>xmm_input</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </hyperv>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <launchSecurity supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </features>
Sep 30 14:33:56 compute-0 nova_compute[260546]: </domainCapabilities>
Sep 30 14:33:56 compute-0 nova_compute[260546]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
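[editor's note] The XML dump above is the libvirt domainCapabilities document that nova_compute retrieves via _get_domain_capabilities (host.py:1037). As a minimal sketch only (not Nova's implementation), the same document could be fetched and filtered directly with libvirt-python; the connection URI, emulator path, machine type, and virt type below are assumptions copied from this log entry, not values confirmed elsewhere.

    #!/usr/bin/env python3
    # Sketch: fetch the domainCapabilities XML for the emulator/machine seen in
    # the log above, then list the CPU models libvirt marks usable='yes' for
    # mode='custom'. Requires the libvirt-python package on the compute host.
    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')          # assumption: local system URI
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',                   # emulatorbin, from <path> above
        'x86_64',                                  # arch
        'pc-q35-rhel9.6.0',                        # machine, from <machine> above
        'kvm',                                     # virttype
        0,                                         # flags
    )
    conn.close()

    root = ET.fromstring(caps_xml)
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes':
            print(model.text, model.get('canonical', ''))

On this host such a listing would cover entries like Westmere, EPYC-v1/v2, and EPYC-Rome-v4, while models marked usable='no' carry a <blockers> element naming the missing features (for example xsaves on the EPYC-Rome variants).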
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.490 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Sep 30 14:33:56 compute-0 nova_compute[260546]: <domainCapabilities>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <domain>kvm</domain>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <machine>pc-q35-rhel9.6.0</machine>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <arch>x86_64</arch>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <vcpu max='4096'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <iothreads supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <os supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <enum name='firmware'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>efi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <loader supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>rom</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pflash</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='readonly'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>yes</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>no</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='secure'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>yes</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>no</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </loader>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </os>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <cpu>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='host-passthrough' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='hostPassthroughMigratable'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>on</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>off</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='maximum' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='maximumMigratable'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>on</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>off</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='host-model' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <vendor>AMD</vendor>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='x2apic'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='hypervisor'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='stibp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='overflow-recov'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='succor'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='lbrv'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='tsc-scale'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='flushbyasid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pause-filter'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pfthreshold'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='rdctl-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='mds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='gds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='require' name='rfds-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <feature policy='disable' name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <mode name='custom' supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Broadwell-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Cooperlake-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Denverton-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Dhyana-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Genoa'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='auto-ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='auto-ibrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Milan-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amd-psfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='no-nested-data-bp'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='null-sel-clr-base'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='stibp-always-on'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-Rome-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='EPYC-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='GraniteRapids-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-128'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-256'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx10-512'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='prefetchiti'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Haswell-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v6'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Icelake-Server-v7'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 python3.9[261230]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='IvyBridge-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='KnightsMill'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4fmaps'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4vnniw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512er'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512pf'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='KnightsMill-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4fmaps'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-4vnniw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512er'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512pf'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G4-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tbm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Opteron_G5-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fma4'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tbm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xop'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SapphireRapids-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='amx-tile'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-bf16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-fp16'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bitalg'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vbmi2'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrc'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fzrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='la57'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='taa-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='tsx-ldtrk'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xfd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SierraForest'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ne-convert'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cmpccxadd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='SierraForest-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ifma'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-ne-convert'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx-vnni-int8'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='bus-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cmpccxadd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fbsdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='fsrs'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ibrs-all'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mcdt-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pbrsb-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='psdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='serialize'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vaes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='vpclmulqdq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Client-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='hle'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='rtm'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Skylake-Server-v5'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512bw'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512cd'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512dq'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512f'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='avx512vl'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='invpcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pcid'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='pku'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='mpx'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v2'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v3'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='core-capability'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='split-lock-detect'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='Snowridge-v4'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='cldemote'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='erms'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='gfni'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdir64b'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='movdiri'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='xsaves'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='athlon'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='athlon-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='core2duo'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='core2duo-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='coreduo'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='coreduo-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='n270'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='n270-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='ss'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='phenom'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <blockers model='phenom-v1'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnow'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <feature name='3dnowext'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </blockers>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </mode>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </cpu>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <memoryBacking supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <enum name='sourceType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>file</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>anonymous</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <value>memfd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </memoryBacking>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <devices>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <disk supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='diskDevice'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>disk</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>cdrom</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>floppy</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>lun</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='bus'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>fdc</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>scsi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>sata</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-non-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </disk>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <graphics supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vnc</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>egl-headless</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>dbus</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </graphics>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <video supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='modelType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vga</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>cirrus</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>none</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>bochs</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>ramfb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </video>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <hostdev supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='mode'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>subsystem</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='startupPolicy'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>default</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>mandatory</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>requisite</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>optional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='subsysType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pci</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>scsi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='capsType'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='pciBackend'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </hostdev>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <rng supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtio-non-transitional</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>random</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>egd</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>builtin</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </rng>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <filesystem supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='driverType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>path</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>handle</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>virtiofs</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </filesystem>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <tpm supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tpm-tis</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tpm-crb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>emulator</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>external</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendVersion'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>2.0</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </tpm>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <redirdev supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='bus'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>usb</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </redirdev>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <channel supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>pty</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>unix</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </channel>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <crypto supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='type'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>qemu</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendModel'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>builtin</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </crypto>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <interface supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='backendType'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>default</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>passt</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </interface>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <panic supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='model'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>isa</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>hyperv</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </panic>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </devices>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   <features>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <gic supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <vmcoreinfo supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <genid supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <backingStoreInput supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <backup supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <async-teardown supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <ps2 supported='yes'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <sev supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <sgx supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <hyperv supported='yes'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       <enum name='features'>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>relaxed</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vapic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>spinlocks</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vpindex</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>runtime</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>synic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>stimer</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>reset</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>vendor_id</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>frequencies</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>reenlightenment</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>tlbflush</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>ipi</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>avic</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>emsr_bitmap</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:         <value>xmm_input</value>
Sep 30 14:33:56 compute-0 nova_compute[260546]:       </enum>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     </hyperv>
Sep 30 14:33:56 compute-0 nova_compute[260546]:     <launchSecurity supported='no'/>
Sep 30 14:33:56 compute-0 nova_compute[260546]:   </features>
Sep 30 14:33:56 compute-0 nova_compute[260546]: </domainCapabilities>
Sep 30 14:33:56 compute-0 nova_compute[260546]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
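[editor's note] The XML dumped above is libvirt's domainCapabilities document, which nova's _get_domain_capabilities retrieves to learn which named CPU models the host can expose and which features block the unusable ones. A minimal sketch of fetching and parsing the same document directly with libvirt-python follows; it is not Nova's code path, and the qemu:///system URI, x86_64 arch, and kvm virt type are assumptions matching this host.

```python
# Minimal sketch (not Nova's actual code path): fetch the same domainCapabilities
# XML as logged above and list which custom CPU models are usable, plus the
# blocking features for the ones that are not.
# Assumes libvirt-python is installed and qemu:///system is reachable.
import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open('qemu:///system')
xml_desc = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
root = ET.fromstring(xml_desc)
conn.close()

for model in root.findall(".//cpu/mode[@name='custom']/model"):
    name = model.text
    if model.get('usable') == 'yes':
        print(f"usable:     {name}")
    else:
        blockers = root.find(f".//cpu/mode[@name='custom']/blockers[@model='{name}']")
        missing = [f.get('name') for f in blockers.findall('feature')] if blockers is not None else []
        print(f"not usable: {name} (missing: {', '.join(missing)})")
```

Run against this host it would report, for example, Westmere as usable and the Skylake/SapphireRapids models as blocked by the feature lists shown in the log.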
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.545 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.545 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.546 2 DEBUG nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.546 2 INFO nova.virt.libvirt.host [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Secure Boot support detected
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.548 2 INFO nova.virt.libvirt.driver [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.548 2 INFO nova.virt.libvirt.driver [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.556 2 DEBUG nova.virt.libvirt.driver [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.629 2 INFO nova.virt.node [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Determined node identity 06783cfc-6d32-454d-9501-ebd8adea3735 from /var/lib/nova/compute_id
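[editor's note] The "Determined node identity ... from /var/lib/nova/compute_id" line above refers to a plain-text file holding the node UUID. A trivial sketch of reading it back, assuming the file contains only the UUID as the log implies:

```python
# Minimal sketch: read the node UUID that nova logged above.
# Path comes from the log line; the plain-text, single-UUID content is an assumption.
from pathlib import Path

node_uuid = Path("/var/lib/nova/compute_id").read_text().strip()
print(node_uuid)  # e.g. 06783cfc-6d32-454d-9501-ebd8adea3735 on this host
```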
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.661 2 WARNING nova.compute.manager [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Compute nodes ['06783cfc-6d32-454d-9501-ebd8adea3735'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.699 2 INFO nova.compute.manager [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Sep 30 14:33:56 compute-0 sudo[261227]: pam_unix(sudo:session): session closed for user root
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.749 2 WARNING nova.compute.manager [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.749 2 DEBUG oslo_concurrency.lockutils [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.749 2 DEBUG oslo_concurrency.lockutils [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.750 2 DEBUG oslo_concurrency.lockutils [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.750 2 DEBUG nova.compute.resource_tracker [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:33:56 compute-0 nova_compute[260546]: 2025-09-30 14:33:56.750 2 DEBUG oslo_concurrency.processutils [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:33:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:33:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:56.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
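[editor's note] The radosgw "beast" lines above record anonymous "HEAD / HTTP/1.0" probes, typical of a load-balancer health check, answered with 200. A minimal sketch of issuing an equivalent probe; the target address and port 8080 are assumptions (the log does not show the listening port), and http.client speaks HTTP/1.1 rather than the HTTP/1.0 shown in the access log:

```python
# Minimal sketch of the anonymous HEAD / probe that produces the beast
# access-log lines above. Host and port are assumptions for illustration.
import http.client

conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)  # 200 expected when radosgw is healthy
conn.close()
```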
Sep 30 14:33:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:33:57 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:33:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:57 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:57.064Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:33:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:33:57.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:33:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:33:57 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997970254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:33:57 compute-0 sudo[261427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmtqohszqvntyusceaehohlqcyfpuxti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242836.9672434-5233-226037963594537/AnsiballZ_systemd.py'
Sep 30 14:33:57 compute-0 sudo[261427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.274 2 DEBUG oslo_concurrency.processutils [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
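[editor's note] The resource tracker shells out to "ceph df --format=json" (the processutils lines above) to size the RBD-backed disk pool. A minimal sketch of running the same command and reading the totals; the exact JSON keys can vary slightly between Ceph releases, and the pool name 'vms' is an assumption standing in for whatever images_rbd_pool is configured to:

```python
# Minimal sketch of the "ceph df --format=json" call logged above and how the
# cluster and pool usage can be read from its output. Keys follow recent Ceph
# releases; the 'vms' pool name is an assumption for illustration.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout

df = json.loads(out)
stats = df["stats"]
print("cluster total bytes:", stats["total_bytes"])
print("cluster avail bytes:", stats["total_avail_bytes"])

for pool in df.get("pools", []):
    if pool["name"] == "vms":  # assumed pool name
        print("vms pool bytes used:", pool["stats"]["bytes_used"])
```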
Sep 30 14:33:57 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Sep 30 14:33:57 compute-0 systemd[1]: Started libvirt nodedev daemon.
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.607 2 WARNING nova.virt.libvirt.driver [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.609 2 DEBUG nova.compute.resource_tracker [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4913MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
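[editor's note] The resource-view line above embeds the host's PCI inventory as a JSON list (pci_devices=[...]). A minimal sketch of summarising such a list by vendor/product ID; the two entries below are trimmed copies from that log line, included only so the snippet is self-contained:

```python
# Minimal sketch: group the pci_devices JSON embedded in the resource-view log
# line above by vendor:product ID. The literal below is a trimmed two-entry
# copy from that line, for illustration only.
import json
from collections import Counter

pci_devices = json.loads("""
[
  {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0",
   "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
   "label": "label_1af4_1000", "dev_type": "type-PCI"},
  {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0",
   "product_id": "7000", "vendor_id": "8086", "numa_node": null,
   "label": "label_8086_7000", "dev_type": "type-PCI"}
]
""")

counts = Counter((d["vendor_id"], d["product_id"]) for d in pci_devices)
for (vendor, product), n in counts.items():
    print(f"{vendor}:{product} x{n}")
```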
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.609 2 DEBUG oslo_concurrency.lockutils [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.609 2 DEBUG oslo_concurrency.lockutils [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:33:57 compute-0 python3.9[261431]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.627 2 WARNING nova.compute.resource_tracker [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] No compute node record for compute-0.ctlplane.example.com:06783cfc-6d32-454d-9501-ebd8adea3735: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 06783cfc-6d32-454d-9501-ebd8adea3735 could not be found.
Sep 30 14:33:57 compute-0 systemd[1]: Stopping nova_compute container...
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.675 2 INFO nova.compute.resource_tracker [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 06783cfc-6d32-454d-9501-ebd8adea3735
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.740 2 DEBUG oslo_concurrency.lockutils [None req-c2366d8d-ed73-4c42-a7ba-a6e695973656 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.741 2 DEBUG oslo_concurrency.lockutils [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.741 2 DEBUG oslo_concurrency.lockutils [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:33:57 compute-0 nova_compute[260546]: 2025-09-30 14:33:57.742 2 DEBUG oslo_concurrency.lockutils [None req-3daea2df-9471-4acf-b371-46669881534d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
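[editor's note] The Acquiring/Acquired/"released" triplets above ("compute_resources", "singleton_lock") are emitted by oslo.concurrency's lockutils when a named, process-local semaphore guards a critical section. A minimal sketch of the same pattern, not Nova's code:

```python
# Minimal sketch (not Nova's actual code) of the oslo.concurrency pattern behind
# the lock messages logged above: a named semaphore serialising a critical section.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def update_available_resource():
    # Only one thread at a time runs this body for the 'compute_resources' name;
    # acquire/release are logged much like the lines above.
    pass

update_available_resource()
```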
Sep 30 14:33:57 compute-0 ceph-mon[74194]: pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:33:57 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2494005445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:33:57 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/997970254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:33:58 compute-0 virtqemud[261000]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Sep 30 14:33:58 compute-0 systemd[1]: libpod-79b56353f797a95c3fad2537e4c6c28f664e9fb123787f88de96d68a2401be2b.scope: Deactivated successfully.
Sep 30 14:33:58 compute-0 virtqemud[261000]: hostname: compute-0
Sep 30 14:33:58 compute-0 virtqemud[261000]: End of file while reading data: Input/output error
Sep 30 14:33:58 compute-0 systemd[1]: libpod-79b56353f797a95c3fad2537e4c6c28f664e9fb123787f88de96d68a2401be2b.scope: Consumed 3.774s CPU time.
Sep 30 14:33:58 compute-0 podman[261458]: 2025-09-30 14:33:58.153207467 +0000 UTC m=+0.465078144 container died 79b56353f797a95c3fad2537e4c6c28f664e9fb123787f88de96d68a2401be2b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Sep 30 14:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79b56353f797a95c3fad2537e4c6c28f664e9fb123787f88de96d68a2401be2b-userdata-shm.mount: Deactivated successfully.
Sep 30 14:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b-merged.mount: Deactivated successfully.
Sep 30 14:33:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:33:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:58 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:33:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:33:58.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:33:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:33:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:33:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:33:58.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:33:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:33:59 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54002480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:33:59
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', '.nfs', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control']
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
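[editor's note] The pg_autoscaler numbers above are internally consistent: each "pg target" equals the pool's usage ratio times its bias times roughly 300, after which the result is rounded to a PG count (and left at the current pg_num when the change is too small to matter). The factor of 300 presumably reflects the default target of 100 PGs per OSD across the 3 OSDs backing this 60 GiB cluster; that decomposition is an assumption, but the arithmetic itself can be checked directly against the log. A small Python check, using the '.mgr' and 'cephfs.cephfs.meta' lines as examples:

```python
# Illustrative arithmetic only -- not Ceph source code.
def raw_pg_target(usage_ratio, bias, budget=300):
    """Reproduce the 'pg target' column from the pg_autoscaler log lines."""
    return usage_ratio * bias * budget

# '.mgr' pool: ratio 7.185749983720779e-06, bias 1.0
print(raw_pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557249951162337, as logged
# 'cephfs.cephfs.meta': ratio 5.087256625643029e-07, bias 4.0
print(raw_pg_target(5.087256625643029e-07, 4.0))   # ~0.0006104707950771635, as logged
```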
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:33:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:34:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:34:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:34:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:00 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:00.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:34:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:00.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:34:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:34:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:01 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:01 compute-0 podman[261458]: 2025-09-30 14:34:01.364090692 +0000 UTC m=+3.675961349 container cleanup 79b56353f797a95c3fad2537e4c6c28f664e9fb123787f88de96d68a2401be2b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Sep 30 14:34:01 compute-0 podman[261458]: nova_compute
Sep 30 14:34:01 compute-0 ceph-mon[74194]: pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:34:01 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:34:01 compute-0 podman[261495]: nova_compute
Sep 30 14:34:01 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Sep 30 14:34:01 compute-0 systemd[1]: Stopped nova_compute container.
Sep 30 14:34:01 compute-0 systemd[1]: Starting nova_compute container...
Sep 30 14:34:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90049131ca2aeab1eace2d787ec95056093e1259ebf13a930146dfebe7574f0b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:01 compute-0 podman[261508]: 2025-09-30 14:34:01.573763099 +0000 UTC m=+0.101601678 container init 79b56353f797a95c3fad2537e4c6c28f664e9fb123787f88de96d68a2401be2b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=edpm, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 14:34:01 compute-0 podman[261508]: 2025-09-30 14:34:01.58606374 +0000 UTC m=+0.113902289 container start 79b56353f797a95c3fad2537e4c6c28f664e9fb123787f88de96d68a2401be2b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:34:01 compute-0 podman[261508]: nova_compute
Sep 30 14:34:01 compute-0 nova_compute[261524]: + sudo -E kolla_set_configs
Sep 30 14:34:01 compute-0 systemd[1]: Started nova_compute container.
Sep 30 14:34:01 compute-0 sudo[261427]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Validating config file
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying service configuration files
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Deleting /etc/nova/nova.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Deleting /etc/ceph
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Creating directory /etc/ceph
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/ceph
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Writing out command to execute
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Sep 30 14:34:01 compute-0 nova_compute[261524]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Sep 30 14:34:01 compute-0 nova_compute[261524]: ++ cat /run_command
Sep 30 14:34:01 compute-0 nova_compute[261524]: + CMD=nova-compute
Sep 30 14:34:01 compute-0 nova_compute[261524]: + ARGS=
Sep 30 14:34:01 compute-0 nova_compute[261524]: + sudo kolla_copy_cacerts
Sep 30 14:34:01 compute-0 nova_compute[261524]: Running command: 'nova-compute'
Sep 30 14:34:01 compute-0 nova_compute[261524]: + [[ ! -n '' ]]
Sep 30 14:34:01 compute-0 nova_compute[261524]: + . kolla_extend_start
Sep 30 14:34:01 compute-0 nova_compute[261524]: + echo 'Running command: '\''nova-compute'\'''
Sep 30 14:34:01 compute-0 nova_compute[261524]: + umask 0022
Sep 30 14:34:01 compute-0 nova_compute[261524]: + exec nova-compute
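[editor's note] The kolla_set_configs output above (delete target, copy source, set permissions, then "Writing out command to execute") is driven by /var/lib/kolla/config_files/config.json with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, and the kolla_start trace that follows simply cats /run_command and execs it. A rough sketch of that copy loop under a simplified config.json shape; the real tool also handles directories, globs, ownership and optional files, so this is an approximation, not the actual kolla code.

```python
import json
import os
import shutil

# Simplified stand-in for kolla_set_configs under COPY_ALWAYS; paths mirror the log.
with open('/var/lib/kolla/config_files/config.json') as f:
    cfg = json.load(f)

for item in cfg.get('config_files', []):
    src, dest = item['source'], item['dest']
    if os.path.exists(dest):
        print(f"Deleting {dest}")
        os.remove(dest)                       # real tool also handles directories, e.g. /etc/ceph
    print(f"Copying {src} to {dest}")
    shutil.copy(src, dest)
    print(f"Setting permission for {dest}")
    os.chmod(dest, int(item.get('perm', '0600'), 8))

# "Writing out command to execute": record cfg['command'] (here: nova-compute) into
# /run_command, which the kolla_start wrapper then reads and execs.
with open('/run_command', 'w') as f:
    f.write(cfg['command'])
```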
Sep 30 14:34:01 compute-0 sudo[261560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:34:01 compute-0 sudo[261560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:01 compute-0 sudo[261560]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:02 compute-0 sudo[261711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enywbvxlpqdpqgtevvgqvemtspnhywjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759242841.8569772-5260-135079464437032/AnsiballZ_podman_container.py'
Sep 30 14:34:02 compute-0 sudo[261711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:34:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54002480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:02 compute-0 ceph-mon[74194]: pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:34:02 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3515895829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:34:02 compute-0 python3.9[261713]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Sep 30 14:34:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 681 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:34:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:02 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:02.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:02 compute-0 systemd[1]: Started libpod-conmon-04c40afe98c03ead5a19c6f7a30aef6b625f019f8462388742f25d51fa92522d.scope.
Sep 30 14:34:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0807da6fbf1387ee0e7a6ae8a795e34ac07e999983a94fdff4778d61846dd013/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0807da6fbf1387ee0e7a6ae8a795e34ac07e999983a94fdff4778d61846dd013/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0807da6fbf1387ee0e7a6ae8a795e34ac07e999983a94fdff4778d61846dd013/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:02 compute-0 podman[261739]: 2025-09-30 14:34:02.633256315 +0000 UTC m=+0.118804650 container init 04c40afe98c03ead5a19c6f7a30aef6b625f019f8462388742f25d51fa92522d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Sep 30 14:34:02 compute-0 podman[261739]: 2025-09-30 14:34:02.64277923 +0000 UTC m=+0.128327545 container start 04c40afe98c03ead5a19c6f7a30aef6b625f019f8462388742f25d51fa92522d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:34:02 compute-0 python3.9[261713]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Applying nova statedir ownership
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Sep 30 14:34:02 compute-0 nova_compute_init[261761]: INFO:nova_statedir:Nova statedir ownership complete
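[editor's note] The nova_compute_init lines above come from a one-shot container that walks /var/lib/nova, chowns anything not already owned by the nova uid/gid (42436:42436 on this host), resets the SELinux context to container_file_t, and skips the path named in NOVA_STATEDIR_OWNERSHIP_SKIP. A condensed sketch of that walk; the target uid/gid and skip path are taken from the log, while the function name and structure are illustrative rather than the actual nova_statedir_ownership.py.

```python
import os

# Values taken from the nova_compute_init log lines above.
TARGET_UID = TARGET_GID = 42436
SKIP = os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP', '/var/lib/nova/compute_id')

def fix_statedir_ownership(root='/var/lib/nova'):
    for dirpath, dirnames, filenames in os.walk(root):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path == SKIP:
                continue
            st = os.stat(path)
            print(f"Checking uid: {st.st_uid} gid: {st.st_gid} path: {path}")
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                print(f"Changing ownership of {path} to {TARGET_UID}:{TARGET_GID}")
                os.chown(path, TARGET_UID, TARGET_GID)
            # SELinux relabelling ("Setting selinux context ... container_file_t")
            # is a separate step in the real script and is omitted here.
```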
Sep 30 14:34:02 compute-0 systemd[1]: libpod-04c40afe98c03ead5a19c6f7a30aef6b625f019f8462388742f25d51fa92522d.scope: Deactivated successfully.
Sep 30 14:34:02 compute-0 podman[261762]: 2025-09-30 14:34:02.699545934 +0000 UTC m=+0.030041837 container died 04c40afe98c03ead5a19c6f7a30aef6b625f019f8462388742f25d51fa92522d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 14:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-04c40afe98c03ead5a19c6f7a30aef6b625f019f8462388742f25d51fa92522d-userdata-shm.mount: Deactivated successfully.
Sep 30 14:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0807da6fbf1387ee0e7a6ae8a795e34ac07e999983a94fdff4778d61846dd013-merged.mount: Deactivated successfully.
Sep 30 14:34:02 compute-0 podman[261775]: 2025-09-30 14:34:02.786846407 +0000 UTC m=+0.069874766 container cleanup 04c40afe98c03ead5a19c6f7a30aef6b625f019f8462388742f25d51fa92522d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:34:02 compute-0 sudo[261711]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:02 compute-0 systemd[1]: libpod-conmon-04c40afe98c03ead5a19c6f7a30aef6b625f019f8462388742f25d51fa92522d.scope: Deactivated successfully.
Sep 30 14:34:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:02.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:03 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:03 compute-0 ceph-mon[74194]: pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 681 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:34:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:03.568Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:34:03 compute-0 sshd-session[223460]: Connection closed by 192.168.122.30 port 53268
Sep 30 14:34:03 compute-0 sshd-session[223457]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:34:03 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Sep 30 14:34:03 compute-0 systemd[1]: session-54.scope: Consumed 2min 42.373s CPU time.
Sep 30 14:34:03 compute-0 systemd-logind[808]: Session 54 logged out. Waiting for processes to exit.
Sep 30 14:34:03 compute-0 systemd-logind[808]: Removed session 54.
Sep 30 14:34:03 compute-0 nova_compute[261524]: 2025-09-30 14:34:03.791 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Sep 30 14:34:03 compute-0 nova_compute[261524]: 2025-09-30 14:34:03.792 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Sep 30 14:34:03 compute-0 nova_compute[261524]: 2025-09-30 14:34:03.792 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Sep 30 14:34:03 compute-0 nova_compute[261524]: 2025-09-30 14:34:03.792 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Sep 30 14:34:03 compute-0 nova_compute[261524]: 2025-09-30 14:34:03.928 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:34:03 compute-0 nova_compute[261524]: 2025-09-30 14:34:03.942 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:34:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:34:04 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2623464074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.415 2 INFO nova.virt.driver [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Sep 30 14:34:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:34:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:04 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54002480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:04.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.535 2 INFO nova.compute.provider_config [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.549 2 DEBUG oslo_concurrency.lockutils [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.550 2 DEBUG oslo_concurrency.lockutils [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.550 2 DEBUG oslo_concurrency.lockutils [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.550 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.550 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.551 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.551 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.551 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.551 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.551 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.552 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.552 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.552 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.552 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.552 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.553 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.553 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.553 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.553 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.553 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.553 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.554 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.554 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.554 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.554 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.554 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.555 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.555 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.555 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.555 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.555 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.556 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.556 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.556 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.556 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.556 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.556 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.557 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.557 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.557 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.557 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.557 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.558 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.558 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.558 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.558 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.558 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.559 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.559 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.559 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.559 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.559 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.559 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.560 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.560 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.560 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.560 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.560 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.561 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.561 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.561 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.561 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.561 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.562 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.562 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.562 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.562 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.562 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.562 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.563 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.563 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.563 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.563 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.563 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.563 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.564 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.564 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.564 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.564 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.564 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.565 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.565 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.565 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.565 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.565 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.565 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.566 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.566 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.566 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.566 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.566 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.567 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.567 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.567 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.567 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.567 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.567 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.568 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.568 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.568 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.568 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.568 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.569 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.569 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.569 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.569 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.569 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.569 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.570 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.570 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.570 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.570 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.570 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.570 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.571 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.571 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.571 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.571 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.571 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.572 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.572 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.572 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.572 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.572 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.573 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.573 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.573 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.573 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.573 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.573 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.574 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.574 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.574 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.574 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.574 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.574 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.575 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.575 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.575 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.575 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.575 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.576 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.576 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.576 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.576 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.576 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.576 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.577 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.577 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.577 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.577 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.577 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.578 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.578 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.578 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.578 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.578 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.579 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.579 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.579 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.579 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.580 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.580 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.580 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.580 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.580 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.581 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.581 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.581 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.581 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.581 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.581 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.582 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.582 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.582 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.582 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.582 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.583 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.583 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.583 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.583 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.583 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.584 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.584 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.584 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.584 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.584 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.585 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.585 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.585 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.585 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.585 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.586 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.586 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.586 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.586 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.586 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.586 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.587 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.587 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.587 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.587 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.587 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.588 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.588 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.588 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.588 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.588 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.588 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.589 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.589 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.589 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.589 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.589 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.590 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.590 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.590 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.590 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.591 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.591 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.591 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.591 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.591 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.592 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.592 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.592 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.592 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.593 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.593 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.593 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.593 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.593 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.594 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.594 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.594 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.594 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.595 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.595 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.595 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.595 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.596 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.596 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.596 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.596 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.596 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.597 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.597 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.597 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.597 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.598 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.598 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.598 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.598 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.598 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.599 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.599 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.599 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.599 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.600 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.600 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.600 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.600 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.600 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.601 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.601 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.601 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.601 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.602 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.602 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.602 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.602 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.603 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.603 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.603 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.603 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.603 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.604 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.604 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.604 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.604 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.605 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.605 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.605 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.605 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.606 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.606 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.606 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.606 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.606 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.607 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.607 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.607 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.607 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.608 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.608 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.608 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.608 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.609 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.609 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.609 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.609 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.609 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.610 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.610 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.610 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.610 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.611 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.611 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.611 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.611 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.612 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.612 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.612 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.612 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.612 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.613 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.613 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.613 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.613 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.614 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.614 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.614 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.614 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.614 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.615 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.615 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.615 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.615 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.616 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.616 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.616 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.616 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.617 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.617 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.617 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.617 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.618 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.618 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.618 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.618 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.618 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.619 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.619 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.619 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.619 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.620 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.620 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.620 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.620 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.620 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.621 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.621 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.621 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.621 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.622 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.622 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.622 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.622 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.623 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.623 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.623 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.623 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.624 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.624 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.624 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.624 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.625 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.625 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.625 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.625 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.625 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.626 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.626 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.626 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.626 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.627 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.627 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.627 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.627 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.628 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.628 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.628 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.628 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.628 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.629 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.629 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.629 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.629 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.629 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.630 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.630 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.630 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.630 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.631 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.631 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.631 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.631 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.632 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.632 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.632 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.632 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.632 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.633 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.633 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.633 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.633 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.634 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.634 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.634 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.634 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.634 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.635 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.635 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.635 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.635 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.636 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.636 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.636 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.636 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.636 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.637 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.637 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.637 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.637 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.638 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.638 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.638 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.638 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.638 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.639 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.639 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.639 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.639 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.640 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.640 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.640 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.640 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.640 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.641 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.641 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.641 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.641 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.642 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.642 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.642 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.642 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.643 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.643 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.643 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.643 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.643 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.644 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.644 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.644 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.644 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.645 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.645 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.645 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.645 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.645 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.646 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.646 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.646 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.646 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.647 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.647 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.647 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.647 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.648 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.648 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.648 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.648 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.648 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.649 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.649 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.649 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.649 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.650 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.650 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.650 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.650 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.651 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.651 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.651 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.651 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.652 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.652 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.652 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.652 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.652 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.653 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.653 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.653 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.653 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.654 2 WARNING oslo_config.cfg [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Sep 30 14:34:04 compute-0 nova_compute[261524]: live_migration_uri is deprecated for removal in favor of two other options that
Sep 30 14:34:04 compute-0 nova_compute[261524]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Sep 30 14:34:04 compute-0 nova_compute[261524]: and ``live_migration_inbound_addr`` respectively.
Sep 30 14:34:04 compute-0 nova_compute[261524]: ).  Its value may be silently ignored in the future.
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.654 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
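Note on the deprecation warning above: the logged live_migration_uri value (qemu+tls://%s/system) can instead be expressed with the two replacement options the warning names, live_migration_scheme and live_migration_inbound_addr, both of which also appear in this option dump (currently logged as None). A minimal nova.conf sketch of that substitution, assuming the usual mapping where the scheme "tls" yields a qemu+tls:// URI and assuming a hypothetical migration address of 172.17.1.10 for this host:

    [libvirt]
    # replaces the transport part of the old live_migration_uri (qemu+tls://)
    live_migration_scheme = tls
    # replaces the %s host placeholder; 172.17.1.10 is an assumed example address
    live_migration_inbound_addr = 172.17.1.10

This is an illustrative sketch only, not the configuration of the deployment that produced this log, which still sets live_migration_uri directly.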
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.654 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.654 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.655 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.655 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.655 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.656 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.656 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.656 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.656 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.657 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.657 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.657 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.657 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.658 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.658 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.658 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.658 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.658 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.rbd_secret_uuid        = 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.659 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.659 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.659 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.659 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.660 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.660 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.660 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.660 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.660 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.661 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.661 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.661 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.661 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.662 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.662 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.662 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.662 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.662 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.663 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.663 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.663 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.663 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.663 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.664 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.664 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.664 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.664 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.664 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.664 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.665 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.665 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.665 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.665 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.665 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.666 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.666 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.666 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.666 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.666 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.667 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.667 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.667 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.667 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.667 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.667 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.668 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.668 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.668 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.668 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.668 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.669 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.669 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.669 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.669 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.669 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.669 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.670 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.670 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.670 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.670 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.670 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.671 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.671 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.671 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.671 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.671 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.671 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.672 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.672 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.672 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.672 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.672 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.673 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.673 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.673 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.673 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.673 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.673 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.674 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.674 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.674 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.674 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.674 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.675 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.675 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.675 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.675 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.675 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.675 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.676 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.676 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.676 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.676 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.676 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.677 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.677 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.677 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.677 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.677 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.677 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.678 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.678 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.678 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.678 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.678 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.679 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.679 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.679 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.679 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.679 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.679 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.680 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.680 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.680 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.680 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.680 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.681 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.681 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.681 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.681 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.681 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.682 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.682 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.682 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.682 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.682 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.683 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.683 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.683 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.683 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.683 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.683 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.684 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.684 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.684 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.684 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.685 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.685 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.685 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.685 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.685 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.686 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.686 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.686 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.686 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.686 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.686 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.687 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.687 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.687 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.687 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.687 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.688 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.688 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.688 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.688 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.688 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.688 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.689 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.689 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.689 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.689 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.689 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.690 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.690 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.690 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.690 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.691 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.691 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.691 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.691 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.691 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.691 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.692 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.692 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.692 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.692 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.692 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.693 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.693 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.693 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.693 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.693 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.694 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.694 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.694 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.694 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.694 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.694 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.695 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.695 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.695 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.695 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.695 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.696 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.696 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.696 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.696 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.696 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.696 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.697 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.697 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.697 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.697 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.697 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.697 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.698 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.698 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.698 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.698 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.698 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.699 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.699 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.699 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.699 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.699 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.699 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.700 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.700 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.700 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.700 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.700 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.701 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.701 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.701 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.701 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.701 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.701 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.702 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.702 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.702 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.702 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.703 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.703 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.703 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.703 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.703 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.703 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.704 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.704 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.704 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.704 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.704 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.705 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.705 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.705 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.705 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.705 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.706 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.706 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.706 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.706 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.706 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.706 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.707 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.707 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.707 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.707 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.708 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.708 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.708 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.708 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.708 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.709 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.709 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.709 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.709 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.710 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.710 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.710 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.710 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.711 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.711 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.711 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.711 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.712 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.712 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.712 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.712 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.712 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.713 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.713 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.713 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.713 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.714 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.714 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.714 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.714 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.714 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.715 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.715 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.715 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.715 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.716 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.716 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.716 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.716 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.716 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.717 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.717 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.717 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.717 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.718 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.718 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.718 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.718 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.718 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.719 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.719 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.719 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.719 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.720 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.720 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.720 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.720 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.721 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.721 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.721 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.721 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.722 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.722 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.722 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.722 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.723 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.723 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.723 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.723 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.723 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.724 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.724 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.724 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.724 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.725 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.725 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.725 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.725 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.725 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.726 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.726 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.726 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.726 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.727 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.727 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.727 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.727 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.727 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.728 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.728 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.728 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.728 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.729 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.729 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.729 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.729 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.729 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.730 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.730 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.730 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.730 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.731 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.731 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.731 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.731 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.731 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.732 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.732 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:04] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:34:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:04] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.732 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.732 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.732 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.733 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.733 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.733 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.733 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.734 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.734 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.734 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.734 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.735 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.735 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.735 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.735 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.735 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.736 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.736 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.736 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.736 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.737 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.737 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.737 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.737 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.738 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.738 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.738 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.738 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.738 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.739 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.739 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.739 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.739 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.740 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.740 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.740 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.740 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.740 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.741 2 DEBUG oslo_service.service [None req-2bf4f646-1a29-4683-9c7f-67adbb0f5ab1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.742 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.760 2 INFO nova.virt.node [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Determined node identity 06783cfc-6d32-454d-9501-ebd8adea3735 from /var/lib/nova/compute_id
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.760 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.761 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.762 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.762 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.776 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7efce4d7f310> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.780 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7efce4d7f310> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.781 2 INFO nova.virt.libvirt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Connection event '1' reason 'None'
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.787 2 INFO nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Libvirt host capabilities <capabilities>
Sep 30 14:34:04 compute-0 nova_compute[261524]: 
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <host>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <uuid>294e3813-d409-45a1-9fb5-458cf671b312</uuid>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <cpu>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <arch>x86_64</arch>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model>EPYC-Rome-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <vendor>AMD</vendor>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <microcode version='16777317'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <signature family='23' model='49' stepping='0'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <maxphysaddr mode='emulate' bits='40'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='x2apic'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='tsc-deadline'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='osxsave'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='hypervisor'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='tsc_adjust'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='spec-ctrl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='stibp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='arch-capabilities'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='cmp_legacy'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='topoext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='virt-ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='lbrv'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='tsc-scale'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='vmcb-clean'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='pause-filter'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='pfthreshold'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='svme-addr-chk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='rdctl-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='skip-l1dfl-vmentry'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='mds-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature name='pschange-mc-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <pages unit='KiB' size='4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <pages unit='KiB' size='2048'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <pages unit='KiB' size='1048576'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </cpu>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <power_management>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <suspend_mem/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </power_management>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <iommu support='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <migration_features>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <live/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <uri_transports>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <uri_transport>tcp</uri_transport>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <uri_transport>rdma</uri_transport>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </uri_transports>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </migration_features>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <topology>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <cells num='1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <cell id='0'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:           <memory unit='KiB'>7864116</memory>
Sep 30 14:34:04 compute-0 nova_compute[261524]:           <pages unit='KiB' size='4'>1966029</pages>
Sep 30 14:34:04 compute-0 nova_compute[261524]:           <pages unit='KiB' size='2048'>0</pages>
Sep 30 14:34:04 compute-0 nova_compute[261524]:           <pages unit='KiB' size='1048576'>0</pages>
Sep 30 14:34:04 compute-0 nova_compute[261524]:           <distances>
Sep 30 14:34:04 compute-0 nova_compute[261524]:             <sibling id='0' value='10'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:           </distances>
Sep 30 14:34:04 compute-0 nova_compute[261524]:           <cpus num='8'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:           </cpus>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         </cell>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </cells>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </topology>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <cache>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </cache>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <secmodel>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model>selinux</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <doi>0</doi>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </secmodel>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <secmodel>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model>dac</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <doi>0</doi>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <baselabel type='kvm'>+107:+107</baselabel>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <baselabel type='qemu'>+107:+107</baselabel>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </secmodel>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </host>
Sep 30 14:34:04 compute-0 nova_compute[261524]: 
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <guest>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <os_type>hvm</os_type>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <arch name='i686'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <wordsize>32</wordsize>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <domain type='qemu'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <domain type='kvm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </arch>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <features>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <pae/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <nonpae/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <acpi default='on' toggle='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <apic default='on' toggle='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <cpuselection/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <deviceboot/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <disksnapshot default='on' toggle='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <externalSnapshot/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </features>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </guest>
Sep 30 14:34:04 compute-0 nova_compute[261524]: 
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <guest>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <os_type>hvm</os_type>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <arch name='x86_64'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <wordsize>64</wordsize>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <domain type='qemu'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <domain type='kvm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </arch>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <features>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <acpi default='on' toggle='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <apic default='on' toggle='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <cpuselection/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <deviceboot/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <disksnapshot default='on' toggle='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <externalSnapshot/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </features>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </guest>
Sep 30 14:34:04 compute-0 nova_compute[261524]: 
Sep 30 14:34:04 compute-0 nova_compute[261524]: </capabilities>
Sep 30 14:34:04 compute-0 nova_compute[261524]: 
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.797 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.803 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Sep 30 14:34:04 compute-0 nova_compute[261524]: <domainCapabilities>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <domain>kvm</domain>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <machine>pc-q35-rhel9.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <arch>i686</arch>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <vcpu max='4096'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <iothreads supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <os supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <enum name='firmware'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <loader supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>rom</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>pflash</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='readonly'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>yes</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>no</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='secure'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>no</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </loader>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </os>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <cpu>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='host-passthrough' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='hostPassthroughMigratable'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>on</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>off</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='maximum' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='maximumMigratable'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>on</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>off</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='host-model' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <vendor>AMD</vendor>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='x2apic'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='hypervisor'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='stibp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='overflow-recov'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='succor'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='ibrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='lbrv'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc-scale'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='flushbyasid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='pause-filter'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='pfthreshold'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='rdctl-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='mds-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='gds-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='rfds-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='disable' name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='custom' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cooperlake'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cooperlake-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cooperlake-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Dhyana-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Genoa'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='auto-ibrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='auto-ibrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10-128'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10-256'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10-512'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v6'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v7'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='KnightsMill'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4fmaps'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4vnniw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512er'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512pf'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='KnightsMill-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4fmaps'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4vnniw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512er'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512pf'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G4-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tbm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G5-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tbm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SierraForest'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ne-convert'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cmpccxadd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SierraForest-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ne-convert'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cmpccxadd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='athlon'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='athlon-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='core2duo'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='core2duo-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='coreduo'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='coreduo-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='n270'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='n270-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='phenom'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='phenom-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <memoryBacking supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <enum name='sourceType'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>file</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>anonymous</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>memfd</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </memoryBacking>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <disk supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='diskDevice'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>disk</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>cdrom</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>floppy</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>lun</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='bus'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>fdc</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>scsi</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>sata</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio-transitional</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio-non-transitional</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <graphics supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>vnc</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>egl-headless</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>dbus</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <video supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='modelType'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>vga</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>cirrus</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>none</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>bochs</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>ramfb</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </video>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <hostdev supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='mode'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>subsystem</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='startupPolicy'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>default</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>mandatory</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>requisite</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>optional</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='subsysType'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>pci</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>scsi</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='capsType'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='pciBackend'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </hostdev>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <rng supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio-transitional</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio-non-transitional</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>random</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>egd</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>builtin</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <filesystem supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='driverType'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>path</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>handle</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtiofs</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </filesystem>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <tpm supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>tpm-tis</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>tpm-crb</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>emulator</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>external</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='backendVersion'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>2.0</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </tpm>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <redirdev supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='bus'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </redirdev>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <channel supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>pty</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>unix</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </channel>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <crypto supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='model'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>qemu</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>builtin</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </crypto>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <interface supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='backendType'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>default</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>passt</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <panic supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>isa</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>hyperv</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </panic>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <features>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <gic supported='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <vmcoreinfo supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <genid supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <backingStoreInput supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <backup supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <async-teardown supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <ps2 supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <sev supported='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <sgx supported='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <hyperv supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='features'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>relaxed</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>vapic</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>spinlocks</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>vpindex</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>runtime</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>synic</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>stimer</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>reset</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>vendor_id</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>frequencies</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>reenlightenment</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>tlbflush</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>ipi</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>avic</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>emsr_bitmap</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>xmm_input</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </hyperv>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <launchSecurity supported='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </features>
Sep 30 14:34:04 compute-0 nova_compute[261524]: </domainCapabilities>
Sep 30 14:34:04 compute-0 nova_compute[261524]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.811 2 DEBUG nova.virt.libvirt.volume.mount [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.815 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Sep 30 14:34:04 compute-0 nova_compute[261524]: <domainCapabilities>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <domain>kvm</domain>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <machine>pc-i440fx-rhel7.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <arch>i686</arch>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <vcpu max='240'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <iothreads supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <os supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <enum name='firmware'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <loader supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>rom</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>pflash</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='readonly'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>yes</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>no</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='secure'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>no</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </loader>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </os>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <cpu>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='host-passthrough' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='hostPassthroughMigratable'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>on</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>off</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='maximum' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='maximumMigratable'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>on</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>off</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='host-model' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <vendor>AMD</vendor>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='x2apic'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='hypervisor'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='stibp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='overflow-recov'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='succor'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='ibrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='lbrv'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc-scale'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='flushbyasid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='pause-filter'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='pfthreshold'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='rdctl-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='mds-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='gds-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='rfds-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='disable' name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='custom' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cooperlake'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cooperlake-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cooperlake-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Dhyana-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Genoa'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='auto-ibrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='auto-ibrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10-128'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10-256'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10-512'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v6'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v7'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='KnightsMill'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4fmaps'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4vnniw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512er'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512pf'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='KnightsMill-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4fmaps'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4vnniw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512er'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512pf'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G4-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tbm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G5-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tbm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SierraForest'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ne-convert'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cmpccxadd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SierraForest-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ne-convert'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cmpccxadd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v2'>
Sep 30 14:34:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:04.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='athlon'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='athlon-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='core2duo'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='core2duo-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='coreduo'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='coreduo-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='n270'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='n270-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='phenom'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='phenom-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <memoryBacking supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <enum name='sourceType'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>file</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>anonymous</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>memfd</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </memoryBacking>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <disk supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='diskDevice'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>disk</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>cdrom</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>floppy</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>lun</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='bus'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>ide</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>fdc</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>scsi</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>sata</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio-transitional</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio-non-transitional</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <graphics supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>vnc</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>egl-headless</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>dbus</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <video supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='modelType'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>vga</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>cirrus</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>none</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>bochs</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>ramfb</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </video>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <hostdev supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='mode'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>subsystem</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='startupPolicy'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>default</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>mandatory</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>requisite</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>optional</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='subsysType'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>pci</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>scsi</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='capsType'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='pciBackend'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </hostdev>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <rng supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio-transitional</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtio-non-transitional</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>random</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>egd</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>builtin</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <filesystem supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='driverType'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>path</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>handle</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>virtiofs</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </filesystem>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <tpm supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>tpm-tis</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>tpm-crb</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>emulator</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>external</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='backendVersion'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>2.0</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </tpm>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <redirdev supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='bus'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </redirdev>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <channel supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>pty</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>unix</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </channel>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <crypto supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='model'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>qemu</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>builtin</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </crypto>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <interface supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='backendType'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>default</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>passt</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <panic supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>isa</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>hyperv</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </panic>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <features>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <gic supported='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <vmcoreinfo supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <genid supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <backingStoreInput supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <backup supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <async-teardown supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <ps2 supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <sev supported='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <sgx supported='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <hyperv supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='features'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>relaxed</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>vapic</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>spinlocks</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>vpindex</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>runtime</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>synic</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>stimer</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>reset</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>vendor_id</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>frequencies</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>reenlightenment</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>tlbflush</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>ipi</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>avic</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>emsr_bitmap</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>xmm_input</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </hyperv>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <launchSecurity supported='no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </features>
Sep 30 14:34:04 compute-0 nova_compute[261524]: </domainCapabilities>
Sep 30 14:34:04 compute-0 nova_compute[261524]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.846 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Sep 30 14:34:04 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.850 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Sep 30 14:34:04 compute-0 nova_compute[261524]: <domainCapabilities>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <domain>kvm</domain>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <machine>pc-q35-rhel9.6.0</machine>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <arch>x86_64</arch>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <vcpu max='4096'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <iothreads supported='yes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <os supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <enum name='firmware'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>efi</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <loader supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>rom</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>pflash</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='readonly'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>yes</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>no</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='secure'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>yes</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>no</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </loader>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   </os>
Sep 30 14:34:04 compute-0 nova_compute[261524]:   <cpu>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='host-passthrough' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='hostPassthroughMigratable'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>on</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>off</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='maximum' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <enum name='maximumMigratable'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>on</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <value>off</value>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='host-model' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <vendor>AMD</vendor>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='x2apic'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='hypervisor'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='stibp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='overflow-recov'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='succor'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='ibrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='lbrv'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc-scale'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='flushbyasid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='pause-filter'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='pfthreshold'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='rdctl-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='mds-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='gds-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='require' name='rfds-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <feature policy='disable' name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:04 compute-0 nova_compute[261524]:     <mode name='custom' supported='yes'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cooperlake'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cooperlake-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Cooperlake-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Denverton-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Dhyana-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Genoa'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='auto-ibrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='auto-ibrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='EPYC-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10-128'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10-256'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx10-512'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Haswell-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v6'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v7'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='KnightsMill'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4fmaps'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4vnniw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512er'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512pf'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='KnightsMill-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4fmaps'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-4vnniw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512er'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512pf'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G4-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tbm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Opteron_G5-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tbm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SierraForest'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ne-convert'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cmpccxadd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='SierraForest-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ifma'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-ne-convert'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx-vnni-int8'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cmpccxadd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v5'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v2'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v3'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v4'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='athlon'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='athlon-v1'>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 14:34:04 compute-0 nova_compute[261524]:       <blockers model='core2duo'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='core2duo-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='coreduo'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='coreduo-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='n270'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='n270-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='phenom'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='phenom-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <memoryBacking supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <enum name='sourceType'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <value>file</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <value>anonymous</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <value>memfd</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   </memoryBacking>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <disk supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='diskDevice'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>disk</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>cdrom</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>floppy</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>lun</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='bus'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>fdc</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>scsi</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>sata</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio-transitional</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio-non-transitional</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <graphics supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>vnc</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>egl-headless</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>dbus</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <video supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='modelType'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>vga</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>cirrus</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>none</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>bochs</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>ramfb</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </video>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <hostdev supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='mode'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>subsystem</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='startupPolicy'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>default</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>mandatory</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>requisite</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>optional</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='subsysType'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>pci</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>scsi</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='capsType'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='pciBackend'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </hostdev>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <rng supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio-transitional</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio-non-transitional</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>random</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>egd</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>builtin</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <filesystem supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='driverType'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>path</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>handle</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtiofs</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </filesystem>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <tpm supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>tpm-tis</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>tpm-crb</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>emulator</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>external</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='backendVersion'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>2.0</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </tpm>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <redirdev supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='bus'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </redirdev>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <channel supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>pty</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>unix</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </channel>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <crypto supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='model'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>qemu</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>builtin</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </crypto>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <interface supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='backendType'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>default</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>passt</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <panic supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>isa</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>hyperv</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </panic>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <features>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <gic supported='no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <vmcoreinfo supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <genid supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <backingStoreInput supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <backup supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <async-teardown supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <ps2 supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <sev supported='no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <sgx supported='no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <hyperv supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='features'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>relaxed</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>vapic</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>spinlocks</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>vpindex</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>runtime</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>synic</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>stimer</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>reset</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>vendor_id</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>frequencies</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>reenlightenment</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>tlbflush</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>ipi</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>avic</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>emsr_bitmap</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>xmm_input</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </hyperv>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <launchSecurity supported='no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   </features>
Sep 30 14:34:05 compute-0 nova_compute[261524]: </domainCapabilities>
Sep 30 14:34:05 compute-0 nova_compute[261524]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
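(Context for the dump above: the <domainCapabilities> document is what libvirt returns for a given emulator/arch/machine type, and nova's _get_domain_capabilities helper in host.py simply fetches and logs it. A minimal sketch of retrieving the same document outside nova with libvirt-python follows; the qemu:///system URI and the parameter values are assumptions taken from this log, not part of the log itself.)

    # Hedged sketch: fetch the same domainCapabilities XML that nova logs above.
    import libvirt

    conn = libvirt.open('qemu:///system')          # local libvirtd, as nova-compute uses
    caps_xml = conn.getDomainCapabilities(
        emulatorbin='/usr/libexec/qemu-kvm',       # the <path> shown in the XML above
        arch='x86_64',
        machine='pc',                              # machine_type named in the next log record
        virttype='kvm',
    )
    print(caps_xml)                                # prints a <domainCapabilities> document like the one above
    conn.close()

(The virsh equivalent would be "virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine pc --virttype kvm".)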
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:04.933 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Sep 30 14:34:05 compute-0 nova_compute[261524]: <domainCapabilities>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <path>/usr/libexec/qemu-kvm</path>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <domain>kvm</domain>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <machine>pc-i440fx-rhel7.6.0</machine>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <arch>x86_64</arch>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <vcpu max='240'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <iothreads supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <os supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <enum name='firmware'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <loader supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>rom</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>pflash</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='readonly'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>yes</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>no</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='secure'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>no</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </loader>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   </os>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <cpu>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <mode name='host-passthrough' supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='hostPassthroughMigratable'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>on</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>off</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <mode name='maximum' supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='maximumMigratable'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>on</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>off</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <mode name='host-model' supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <vendor>AMD</vendor>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <maxphysaddr mode='passthrough' limit='40'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='x2apic'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc-deadline'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='hypervisor'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc_adjust'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='spec-ctrl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='stibp'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='arch-capabilities'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='ssbd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='cmp_legacy'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='overflow-recov'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='succor'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='ibrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='amd-ssbd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='virt-ssbd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='lbrv'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='tsc-scale'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='vmcb-clean'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='flushbyasid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='pause-filter'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='pfthreshold'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='svme-addr-chk'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='rdctl-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='mds-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='gds-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='require' name='rfds-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <feature policy='disable' name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <mode name='custom' supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Broadwell'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Broadwell-IBRS'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Broadwell-noTSX'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Broadwell-noTSX-IBRS'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Broadwell-v4'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-noTSX'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v4'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Cascadelake-Server-v5'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Cooperlake'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Cooperlake-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Cooperlake-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Denverton'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Denverton-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Denverton-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Denverton-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Dhyana-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-Genoa'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='auto-ibrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:05 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-Genoa-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='auto-ibrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-Milan-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amd-psfd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='no-nested-data-bp'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='null-sel-clr-base'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='stibp-always-on'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-Rome-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='EPYC-v4'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='GraniteRapids-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-fp16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx10'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx10-128'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx10-256'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx10-512'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='prefetchiti'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Haswell'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Haswell-IBRS'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Haswell-noTSX'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Haswell-noTSX-IBRS'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Haswell-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Haswell-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Haswell-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Haswell-v4'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-noTSX'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v4'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v5'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v6'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Icelake-Server-v7'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='IvyBridge'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-IBRS'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='IvyBridge-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='KnightsMill'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-4fmaps'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-4vnniw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512er'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512pf'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='KnightsMill-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-4fmaps'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-4vnniw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512er'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512pf'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Opteron_G4'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Opteron_G4-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Opteron_G5'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='tbm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Opteron_G5-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fma4'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='tbm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xop'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='SapphireRapids-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-int8'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='amx-tile'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-bf16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-fp16'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512-vpopcntdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bitalg'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vbmi2'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrc'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fzrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='la57'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='taa-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='tsx-ldtrk'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xfd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='SierraForest'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-ne-convert'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni-int8'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='cmpccxadd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='SierraForest-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-ifma'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-ne-convert'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx-vnni-int8'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='bus-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='cmpccxadd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fbsdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='fsrs'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ibrs-all'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='mcdt-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pbrsb-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='psdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='sbdr-ssdp-no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='serialize'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vaes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='vpclmulqdq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-IBRS'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Client-v4'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-IBRS'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='hle'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='rtm'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v4'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Skylake-Server-v5'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512bw'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512cd'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512dq'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512f'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='avx512vl'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='invpcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pcid'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='pku'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Snowridge'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='mpx'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v2'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v3'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='core-capability'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='split-lock-detect'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='Snowridge-v4'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='cldemote'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='erms'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='gfni'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdir64b'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='movdiri'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='xsaves'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='athlon'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='athlon-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='core2duo'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='core2duo-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='coreduo'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='coreduo-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='n270'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='n270-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='ss'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='phenom'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <blockers model='phenom-v1'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnow'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <feature name='3dnowext'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </blockers>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </mode>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <memoryBacking supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <enum name='sourceType'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <value>file</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <value>anonymous</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <value>memfd</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   </memoryBacking>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <disk supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='diskDevice'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>disk</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>cdrom</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>floppy</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>lun</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='bus'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>ide</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>fdc</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>scsi</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>sata</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio-transitional</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio-non-transitional</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <graphics supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>vnc</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>egl-headless</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>dbus</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <video supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='modelType'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>vga</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>cirrus</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>none</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>bochs</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>ramfb</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </video>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <hostdev supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='mode'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>subsystem</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='startupPolicy'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>default</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>mandatory</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>requisite</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>optional</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='subsysType'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>pci</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>scsi</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='capsType'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='pciBackend'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </hostdev>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <rng supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio-transitional</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtio-non-transitional</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>random</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>egd</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>builtin</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <filesystem supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='driverType'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>path</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>handle</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>virtiofs</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </filesystem>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <tpm supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>tpm-tis</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>tpm-crb</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>emulator</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>external</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='backendVersion'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>2.0</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </tpm>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <redirdev supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='bus'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>usb</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </redirdev>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <channel supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>pty</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>unix</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </channel>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <crypto supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='model'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='type'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>qemu</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='backendModel'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>builtin</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </crypto>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <interface supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='backendType'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>default</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>passt</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <panic supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='model'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>isa</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>hyperv</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </panic>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   <features>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <gic supported='no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <vmcoreinfo supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <genid supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <backingStoreInput supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <backup supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <async-teardown supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <ps2 supported='yes'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <sev supported='no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <sgx supported='no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <hyperv supported='yes'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       <enum name='features'>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>relaxed</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>vapic</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>spinlocks</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>vpindex</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>runtime</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>synic</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>stimer</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>reset</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>vendor_id</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>frequencies</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>reenlightenment</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>tlbflush</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>ipi</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>avic</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>emsr_bitmap</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:         <value>xmm_input</value>
Sep 30 14:34:05 compute-0 nova_compute[261524]:       </enum>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     </hyperv>
Sep 30 14:34:05 compute-0 nova_compute[261524]:     <launchSecurity supported='no'/>
Sep 30 14:34:05 compute-0 nova_compute[261524]:   </features>
Sep 30 14:34:05 compute-0 nova_compute[261524]: </domainCapabilities>
Sep 30 14:34:05 compute-0 nova_compute[261524]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
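The block above is the libvirt domainCapabilities document that nova caches per emulator, architecture and machine type during startup. A minimal sketch of the same query with libvirt-python, assuming a local qemu:///system connection and letting libvirt pick the default emulator and machine type:

    import libvirt
    import xml.etree.ElementTree as ET

    # Read-only connection to the local hypervisor; the URI is an assumption.
    conn = libvirt.openReadOnly("qemu:///system")

    # Arguments: (emulatorbin, arch, machine, virttype, flags); None lets
    # libvirt choose its defaults for this host.
    caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    root = ET.fromstring(caps_xml)

    # Mirror the <graphics> enum seen in the log above.
    graphics = [v.text for v in
                root.findall("./devices/graphics/enum[@name='type']/value")]
    print("graphics types:", graphics)
    conn.close()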
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.003 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.004 2 INFO nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Secure Boot support detected
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.007 2 INFO nova.virt.libvirt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.007 2 INFO nova.virt.libvirt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.020 2 DEBUG nova.virt.libvirt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.046 2 INFO nova.virt.node [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Determined node identity 06783cfc-6d32-454d-9501-ebd8adea3735 from /var/lib/nova/compute_id
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.066 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Verified node 06783cfc-6d32-454d-9501-ebd8adea3735 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.100 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.158 2 ERROR nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Could not retrieve compute node resource provider 06783cfc-6d32-454d-9501-ebd8adea3735 and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '06783cfc-6d32-454d-9501-ebd8adea3735' not found: No resource provider with uuid 06783cfc-6d32-454d-9501-ebd8adea3735 found  ", "request_id": "req-84320f92-78ed-48bc-a270-6240f5ab3138"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '06783cfc-6d32-454d-9501-ebd8adea3735' not found: No resource provider with uuid 06783cfc-6d32-454d-9501-ebd8adea3735 found  ", "request_id": "req-84320f92-78ed-48bc-a270-6240f5ab3138"}]}
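The 404 above is placement answering GET /resource_providers/{uuid}/allocations before the provider record exists; the record is only created about a second later (14:34:06.097). A minimal sketch of the same request made outside nova, with a hypothetical placement endpoint and token:

    import requests

    PLACEMENT = "http://placement.example.com:8778"   # hypothetical endpoint
    TOKEN = "<keystone-token>"                         # hypothetical token
    UUID = "06783cfc-6d32-454d-9501-ebd8adea3735"

    resp = requests.get(
        f"{PLACEMENT}/resource_providers/{UUID}/allocations",
        headers={"X-Auth-Token": TOKEN,
                 "OpenStack-API-Version": "placement 1.28"},
    )
    # Until the provider is registered, placement returns the 404 body quoted
    # in the log ("No resource provider with uuid ... found").
    print(resp.status_code, resp.json())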
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.180 2 DEBUG oslo_concurrency.lockutils [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.180 2 DEBUG oslo_concurrency.lockutils [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.180 2 DEBUG oslo_concurrency.lockutils [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.180 2 DEBUG nova.compute.resource_tracker [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.181 2 DEBUG oslo_concurrency.processutils [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:34:05 compute-0 ceph-mon[74194]: pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:34:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:34:05 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2683933873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.630 2 DEBUG oslo_concurrency.processutils [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
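The command the resource tracker shells out to is shown verbatim above. A small sketch that runs the same ceph df query and reads the cluster totals, accessing the JSON fields defensively since their names can vary slightly between Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    df = json.loads(out)

    stats = df.get("stats", {})
    total = stats.get("total_bytes", 0)
    avail = stats.get("total_avail_bytes", 0)
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")

    for pool in df.get("pools", []):
        print(pool.get("name"), pool.get("stats", {}).get("max_avail"))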
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.799 2 WARNING nova.virt.libvirt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.800 2 DEBUG nova.compute.resource_tracker [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4925MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.800 2 DEBUG oslo_concurrency.lockutils [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.800 2 DEBUG oslo_concurrency.lockutils [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.977 2 ERROR nova.compute.resource_tracker [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '06783cfc-6d32-454d-9501-ebd8adea3735' not found: No resource provider with uuid 06783cfc-6d32-454d-9501-ebd8adea3735 found  ", "request_id": "req-9c2354e4-2e65-4b70-b75b-05ddffe1ae5d"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '06783cfc-6d32-454d-9501-ebd8adea3735' not found: No resource provider with uuid 06783cfc-6d32-454d-9501-ebd8adea3735 found  ", "request_id": "req-9c2354e4-2e65-4b70-b75b-05ddffe1ae5d"}]}
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.978 2 DEBUG nova.compute.resource_tracker [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:34:05 compute-0 nova_compute[261524]: 2025-09-30 14:34:05.978 2 DEBUG nova.compute.resource_tracker [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.097 2 INFO nova.scheduler.client.report [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [req-92265bc6-c7e6-4caa-bfd9-2d6fb5cc86f0] Created resource provider record via placement API for resource provider with UUID 06783cfc-6d32-454d-9501-ebd8adea3735 and name compute-0.ctlplane.example.com.
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.124 2 DEBUG oslo_concurrency.processutils [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:34:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:06 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1598379290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:34:06 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2683933873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:34:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Sep 30 14:34:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:06 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:06.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:34:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1556795962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.559 2 DEBUG oslo_concurrency.processutils [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.565 2 DEBUG nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Sep 30 14:34:06 compute-0 nova_compute[261524]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.565 2 INFO nova.virt.libvirt.host [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] kernel doesn't support AMD SEV
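The two lines above show the SEV probe: nova reads /sys/module/kvm_amd/parameters/sev, sees "N", and logs that the kernel lacks AMD SEV support. A minimal sketch of that check; treating "1", "Y" or "y" as enabled is an assumption, and the exact comparison nova uses may differ:

    from pathlib import Path

    SEV_PARAM = Path("/sys/module/kvm_amd/parameters/sev")

    def kernel_supports_amd_sev() -> bool:
        if not SEV_PARAM.exists():       # kvm_amd not loaded at all
            return False
        value = SEV_PARAM.read_text().strip()
        return value in ("1", "Y", "y")  # "N" on this host, hence unsupported

    print("AMD SEV supported by kernel:", kernel_supports_amd_sev())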
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.566 2 DEBUG nova.compute.provider_tree [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.566 2 DEBUG nova.virt.libvirt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.675 2 DEBUG nova.scheduler.client.report [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Updated inventory for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.676 2 DEBUG nova.compute.provider_tree [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Updating resource provider 06783cfc-6d32-454d-9501-ebd8adea3735 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.677 2 DEBUG nova.compute.provider_tree [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
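The inventory pushed to placement above is built from the hypervisor view (8 vCPUs, 7679 MB RAM, 59 GB disk) plus the configured reservations and allocation ratios. A sketch that rebuilds the same payload, with the reserved and allocation_ratio values copied from the log line rather than derived from nova configuration:

    def make_inventory(vcpus, ram_mb, disk_gb):
        return {
            "MEMORY_MB": {"total": ram_mb, "reserved": 512, "min_unit": 1,
                          "max_unit": ram_mb, "step_size": 1,
                          "allocation_ratio": 1.0},
            "VCPU": {"total": vcpus, "reserved": 0, "min_unit": 1,
                     "max_unit": vcpus, "step_size": 1,
                     "allocation_ratio": 4.0},
            "DISK_GB": {"total": disk_gb, "reserved": 0, "min_unit": 1,
                        "max_unit": disk_gb, "step_size": 1,
                        "allocation_ratio": 0.9},
        }

    print(make_inventory(8, 7679, 59))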
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.755 2 DEBUG nova.compute.provider_tree [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Updating resource provider 06783cfc-6d32-454d-9501-ebd8adea3735 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.787 2 DEBUG nova.compute.resource_tracker [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.788 2 DEBUG oslo_concurrency.lockutils [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.788 2 DEBUG nova.service [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.852 2 DEBUG nova.service [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Sep 30 14:34:06 compute-0 nova_compute[261524]: 2025-09-30 14:34:06.853 2 DEBUG nova.servicegroup.drivers.db [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Sep 30 14:34:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:34:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:06.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:34:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:07 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54002480 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:07.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:34:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:07.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:34:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:07.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:34:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:07 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:34:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:07 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:34:07 compute-0 ceph-mon[74194]: pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Sep 30 14:34:07 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/433949717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:34:07 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1556795962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:34:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 596 B/s wr, 1 op/s
Sep 30 14:34:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:08 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:08.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:08.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:09 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:09 compute-0 ceph-mon[74194]: pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 596 B/s wr, 1 op/s
Sep 30 14:34:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54002480 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:34:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 596 B/s wr, 1 op/s
Sep 30 14:34:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:10 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54002480 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:10.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:10.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:11 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:11 compute-0 ceph-mon[74194]: pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 596 B/s wr, 1 op/s
Sep 30 14:34:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:34:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:12 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f38004710 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:12.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:13 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:13 compute-0 podman[261906]: 2025-09-30 14:34:13.141588312 +0000 UTC m=+0.064297886 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Sep 30 14:34:13 compute-0 podman[261908]: 2025-09-30 14:34:13.171518346 +0000 UTC m=+0.088749943 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd)
Sep 30 14:34:13 compute-0 podman[261907]: 2025-09-30 14:34:13.175887573 +0000 UTC m=+0.094221430 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller)
Sep 30 14:34:13 compute-0 podman[261963]: 2025-09-30 14:34:13.231519386 +0000 UTC m=+0.062184770 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Sep 30 14:34:13 compute-0 ceph-mon[74194]: pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:34:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:13.569Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:34:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:13.569Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:34:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:34:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:14 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004600 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:14.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:34:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:34:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:14] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:34:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:14] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:34:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:14.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:15 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48002b80 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:15 compute-0 ceph-mon[74194]: pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:34:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:34:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143416 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:34:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:34:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:16 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f2c003ad0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:16.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:17 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004600 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:17.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:34:17 compute-0 ceph-mon[74194]: pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:34:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48002b80 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:34:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:18 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:18.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:19 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:19 compute-0 ceph-mon[74194]: pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:34:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004600 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:34:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:20 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48002b80 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:20.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:21 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:21 compute-0 ceph-mon[74194]: pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:34:21 compute-0 sudo[262001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:34:21 compute-0 sudo[262001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:21 compute-0 sudo[262001]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f3c004530 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:34:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:22 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f54004600 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:22.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:34:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:22.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:34:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:23 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48002280 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:23.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:34:23 compute-0 ceph-mon[74194]: pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:34:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48002280 fd 14 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:34:24 compute-0 kernel: ganesha.nfsd[261992]: segfault at 50 ip 00007f800dcd132e sp 00007f7fd27fb210 error 4 in libntirpc.so.5.8[7f800dcb6000+2c000] likely on CPU 5 (core 0, socket 5)
Sep 30 14:34:24 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 14:34:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[224444]: 30/09/2025 14:34:24 : epoch 68dbe982 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f48002280 fd 14 proxy ignored for local
Sep 30 14:34:24 compute-0 systemd[1]: Started Process Core Dump (PID 262030/UID 0).
Sep 30 14:34:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:24.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:24] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:34:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:24] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:34:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:24.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:25 compute-0 sudo[262032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:34:25 compute-0 sudo[262032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:25 compute-0 sudo[262032]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:25 compute-0 sudo[262057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:34:25 compute-0 sudo[262057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:34:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:34:25 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:25 compute-0 systemd-coredump[262031]: Process 224448 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 67:
                                                    #0  0x00007f800dcd132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 14:34:25 compute-0 sudo[262057]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:25 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:34:25 compute-0 systemd[1]: systemd-coredump@6-262030-0.service: Deactivated successfully.
Sep 30 14:34:25 compute-0 systemd[1]: systemd-coredump@6-262030-0.service: Consumed 1.031s CPU time.
Sep 30 14:34:25 compute-0 podman[262121]: 2025-09-30 14:34:25.681594437 +0000 UTC m=+0.025600988 container died a9f632908bc14e1c8c508281bd22a30acb3acffb173c593a50e9fa74e66cefeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 14:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cfbb0d4da650e36839dccb1ea26065108c4a4cfbb681aac74dd645b5c0a4c63-merged.mount: Deactivated successfully.
Sep 30 14:34:25 compute-0 ceph-mon[74194]: pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:34:25 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:25 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:25 compute-0 podman[262121]: 2025-09-30 14:34:25.721724294 +0000 UTC m=+0.065730825 container remove a9f632908bc14e1c8c508281bd22a30acb3acffb173c593a50e9fa74e66cefeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:34:25 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Main process exited, code=exited, status=139/n/a
Sep 30 14:34:25 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Failed with result 'exit-code'.
Sep 30 14:34:25 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.583s CPU time.
Sep 30 14:34:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:34:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:34:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:34:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:34:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:34:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:34:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:34:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:34:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:34:26 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:34:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:34:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:34:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:34:26 compute-0 sudo[262165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:34:26 compute-0 sudo[262165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:34:26 compute-0 sudo[262165]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:34:26 compute-0 sudo[262190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:34:26 compute-0 sudo[262190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:26 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:34:26 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:34:26 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:26 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:26 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:34:26 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:34:26 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:34:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:26.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:26 compute-0 podman[262257]: 2025-09-30 14:34:26.960411219 +0000 UTC m=+0.041433133 container create a5b572c3d795b1bdc0612921a338e01e9a8c1ad753a0ab2bb77a6ae801e71a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:34:27 compute-0 systemd[1]: Started libpod-conmon-a5b572c3d795b1bdc0612921a338e01e9a8c1ad753a0ab2bb77a6ae801e71a8e.scope.
Sep 30 14:34:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:34:27 compute-0 podman[262257]: 2025-09-30 14:34:26.942807656 +0000 UTC m=+0.023829600 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:34:27 compute-0 podman[262257]: 2025-09-30 14:34:27.050045535 +0000 UTC m=+0.131067449 container init a5b572c3d795b1bdc0612921a338e01e9a8c1ad753a0ab2bb77a6ae801e71a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bouman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:34:27 compute-0 podman[262257]: 2025-09-30 14:34:27.059036606 +0000 UTC m=+0.140058520 container start a5b572c3d795b1bdc0612921a338e01e9a8c1ad753a0ab2bb77a6ae801e71a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:34:27 compute-0 podman[262257]: 2025-09-30 14:34:27.062475398 +0000 UTC m=+0.143497342 container attach a5b572c3d795b1bdc0612921a338e01e9a8c1ad753a0ab2bb77a6ae801e71a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:34:27 compute-0 zealous_bouman[262274]: 167 167
Sep 30 14:34:27 compute-0 systemd[1]: libpod-a5b572c3d795b1bdc0612921a338e01e9a8c1ad753a0ab2bb77a6ae801e71a8e.scope: Deactivated successfully.
Sep 30 14:34:27 compute-0 podman[262257]: 2025-09-30 14:34:27.066105286 +0000 UTC m=+0.147127210 container died a5b572c3d795b1bdc0612921a338e01e9a8c1ad753a0ab2bb77a6ae801e71a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bouman, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:34:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:27.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fede335436ba6a1318d36ae783ab41fb17013d9fd6223d4d1235ce627b2ea6e-merged.mount: Deactivated successfully.
Sep 30 14:34:27 compute-0 podman[262257]: 2025-09-30 14:34:27.105302178 +0000 UTC m=+0.186324082 container remove a5b572c3d795b1bdc0612921a338e01e9a8c1ad753a0ab2bb77a6ae801e71a8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_bouman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:34:27 compute-0 systemd[1]: libpod-conmon-a5b572c3d795b1bdc0612921a338e01e9a8c1ad753a0ab2bb77a6ae801e71a8e.scope: Deactivated successfully.
Sep 30 14:34:27 compute-0 podman[262298]: 2025-09-30 14:34:27.310598177 +0000 UTC m=+0.065791786 container create 4f396c1f6f6b37d3398f28366b353aa3149ae3bb35d75885c0a21be5b130a0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:34:27 compute-0 systemd[1]: Started libpod-conmon-4f396c1f6f6b37d3398f28366b353aa3149ae3bb35d75885c0a21be5b130a0bb.scope.
Sep 30 14:34:27 compute-0 podman[262298]: 2025-09-30 14:34:27.289648805 +0000 UTC m=+0.044842434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:34:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18931ae18ed956ff3bf3e865222c3500926fef1c39568c47fd8930039453a30d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18931ae18ed956ff3bf3e865222c3500926fef1c39568c47fd8930039453a30d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18931ae18ed956ff3bf3e865222c3500926fef1c39568c47fd8930039453a30d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18931ae18ed956ff3bf3e865222c3500926fef1c39568c47fd8930039453a30d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18931ae18ed956ff3bf3e865222c3500926fef1c39568c47fd8930039453a30d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:27 compute-0 podman[262298]: 2025-09-30 14:34:27.411911717 +0000 UTC m=+0.167105366 container init 4f396c1f6f6b37d3398f28366b353aa3149ae3bb35d75885c0a21be5b130a0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 14:34:27 compute-0 podman[262298]: 2025-09-30 14:34:27.423324063 +0000 UTC m=+0.178517672 container start 4f396c1f6f6b37d3398f28366b353aa3149ae3bb35d75885c0a21be5b130a0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:34:27 compute-0 podman[262298]: 2025-09-30 14:34:27.426977601 +0000 UTC m=+0.182171230 container attach 4f396c1f6f6b37d3398f28366b353aa3149ae3bb35d75885c0a21be5b130a0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:34:27 compute-0 ceph-mon[74194]: pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:34:27 compute-0 jovial_keller[262316]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:34:27 compute-0 jovial_keller[262316]: --> All data devices are unavailable
Sep 30 14:34:27 compute-0 systemd[1]: libpod-4f396c1f6f6b37d3398f28366b353aa3149ae3bb35d75885c0a21be5b130a0bb.scope: Deactivated successfully.
Sep 30 14:34:27 compute-0 podman[262298]: 2025-09-30 14:34:27.817071921 +0000 UTC m=+0.572265590 container died 4f396c1f6f6b37d3398f28366b353aa3149ae3bb35d75885c0a21be5b130a0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_keller, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 14:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-18931ae18ed956ff3bf3e865222c3500926fef1c39568c47fd8930039453a30d-merged.mount: Deactivated successfully.
Sep 30 14:34:27 compute-0 podman[262298]: 2025-09-30 14:34:27.864655408 +0000 UTC m=+0.619849037 container remove 4f396c1f6f6b37d3398f28366b353aa3149ae3bb35d75885c0a21be5b130a0bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:34:27 compute-0 systemd[1]: libpod-conmon-4f396c1f6f6b37d3398f28366b353aa3149ae3bb35d75885c0a21be5b130a0bb.scope: Deactivated successfully.
Sep 30 14:34:27 compute-0 sudo[262190]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:28 compute-0 sudo[262344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:34:28 compute-0 sudo[262344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:28 compute-0 sudo[262344]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:28 compute-0 sudo[262369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:34:28 compute-0 sudo[262369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:34:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:28.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:28 compute-0 podman[262433]: 2025-09-30 14:34:28.517555521 +0000 UTC m=+0.037400305 container create b956f545276b1701edae3153e913eee0752276ca205b1e8bc9fddaf44e5fd896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:34:28 compute-0 systemd[1]: Started libpod-conmon-b956f545276b1701edae3153e913eee0752276ca205b1e8bc9fddaf44e5fd896.scope.
Sep 30 14:34:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:34:28 compute-0 podman[262433]: 2025-09-30 14:34:28.591628689 +0000 UTC m=+0.111473493 container init b956f545276b1701edae3153e913eee0752276ca205b1e8bc9fddaf44e5fd896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:34:28 compute-0 podman[262433]: 2025-09-30 14:34:28.598145604 +0000 UTC m=+0.117990388 container start b956f545276b1701edae3153e913eee0752276ca205b1e8bc9fddaf44e5fd896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:34:28 compute-0 podman[262433]: 2025-09-30 14:34:28.503037361 +0000 UTC m=+0.022882165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:34:28 compute-0 podman[262433]: 2025-09-30 14:34:28.601606507 +0000 UTC m=+0.121451541 container attach b956f545276b1701edae3153e913eee0752276ca205b1e8bc9fddaf44e5fd896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldberg, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:34:28 compute-0 serene_goldberg[262449]: 167 167
Sep 30 14:34:28 compute-0 systemd[1]: libpod-b956f545276b1701edae3153e913eee0752276ca205b1e8bc9fddaf44e5fd896.scope: Deactivated successfully.
Sep 30 14:34:28 compute-0 podman[262433]: 2025-09-30 14:34:28.604645528 +0000 UTC m=+0.124490312 container died b956f545276b1701edae3153e913eee0752276ca205b1e8bc9fddaf44e5fd896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:34:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee00b86cd3ebf4fab9b60bc285de0084887a76ab535b839bfc591e6e4a510d1a-merged.mount: Deactivated successfully.
Sep 30 14:34:28 compute-0 podman[262433]: 2025-09-30 14:34:28.639493933 +0000 UTC m=+0.159338727 container remove b956f545276b1701edae3153e913eee0752276ca205b1e8bc9fddaf44e5fd896 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:34:28 compute-0 systemd[1]: libpod-conmon-b956f545276b1701edae3153e913eee0752276ca205b1e8bc9fddaf44e5fd896.scope: Deactivated successfully.
Sep 30 14:34:28 compute-0 podman[262473]: 2025-09-30 14:34:28.796772494 +0000 UTC m=+0.041373032 container create 8dc18a8c892b8c40384ea4b8205696a424391109e26c4175430db66dfa3f5596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:34:28 compute-0 systemd[1]: Started libpod-conmon-8dc18a8c892b8c40384ea4b8205696a424391109e26c4175430db66dfa3f5596.scope.
Sep 30 14:34:28 compute-0 podman[262473]: 2025-09-30 14:34:28.779364696 +0000 UTC m=+0.023965244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:34:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e69a8492ac2905c63a848a92f2d08abdf870e0da895b7c389fadf4576af416/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e69a8492ac2905c63a848a92f2d08abdf870e0da895b7c389fadf4576af416/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e69a8492ac2905c63a848a92f2d08abdf870e0da895b7c389fadf4576af416/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e69a8492ac2905c63a848a92f2d08abdf870e0da895b7c389fadf4576af416/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:28 compute-0 podman[262473]: 2025-09-30 14:34:28.895216286 +0000 UTC m=+0.139816844 container init 8dc18a8c892b8c40384ea4b8205696a424391109e26c4175430db66dfa3f5596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 14:34:28 compute-0 podman[262473]: 2025-09-30 14:34:28.903138068 +0000 UTC m=+0.147738596 container start 8dc18a8c892b8c40384ea4b8205696a424391109e26c4175430db66dfa3f5596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 14:34:28 compute-0 podman[262473]: 2025-09-30 14:34:28.90954867 +0000 UTC m=+0.154149218 container attach 8dc18a8c892b8c40384ea4b8205696a424391109e26c4175430db66dfa3f5596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:34:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:34:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:28.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:34:29 compute-0 serene_germain[262489]: {
Sep 30 14:34:29 compute-0 serene_germain[262489]:     "0": [
Sep 30 14:34:29 compute-0 serene_germain[262489]:         {
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "devices": [
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "/dev/loop3"
Sep 30 14:34:29 compute-0 serene_germain[262489]:             ],
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "lv_name": "ceph_lv0",
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "lv_size": "21470642176",
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "name": "ceph_lv0",
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "tags": {
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.cluster_name": "ceph",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.crush_device_class": "",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.encrypted": "0",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.osd_id": "0",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.type": "block",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.vdo": "0",
Sep 30 14:34:29 compute-0 serene_germain[262489]:                 "ceph.with_tpm": "0"
Sep 30 14:34:29 compute-0 serene_germain[262489]:             },
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "type": "block",
Sep 30 14:34:29 compute-0 serene_germain[262489]:             "vg_name": "ceph_vg0"
Sep 30 14:34:29 compute-0 serene_germain[262489]:         }
Sep 30 14:34:29 compute-0 serene_germain[262489]:     ]
Sep 30 14:34:29 compute-0 serene_germain[262489]: }
Sep 30 14:34:29 compute-0 systemd[1]: libpod-8dc18a8c892b8c40384ea4b8205696a424391109e26c4175430db66dfa3f5596.scope: Deactivated successfully.
Sep 30 14:34:29 compute-0 podman[262473]: 2025-09-30 14:34:29.181675904 +0000 UTC m=+0.426276432 container died 8dc18a8c892b8c40384ea4b8205696a424391109e26c4175430db66dfa3f5596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 14:34:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-69e69a8492ac2905c63a848a92f2d08abdf870e0da895b7c389fadf4576af416-merged.mount: Deactivated successfully.
Sep 30 14:34:29 compute-0 podman[262473]: 2025-09-30 14:34:29.375345182 +0000 UTC m=+0.619945750 container remove 8dc18a8c892b8c40384ea4b8205696a424391109e26c4175430db66dfa3f5596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_germain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:34:29 compute-0 systemd[1]: libpod-conmon-8dc18a8c892b8c40384ea4b8205696a424391109e26c4175430db66dfa3f5596.scope: Deactivated successfully.
Sep 30 14:34:29 compute-0 sudo[262369]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:29 compute-0 sudo[262512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:34:29 compute-0 sudo[262512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:29 compute-0 sudo[262512]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:29 compute-0 sudo[262537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:34:29 compute-0 sudo[262537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:34:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:34:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:34:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:34:29 compute-0 ceph-mon[74194]: pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:34:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:34:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:34:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:34:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:34:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:34:30 compute-0 podman[262602]: 2025-09-30 14:34:30.025340327 +0000 UTC m=+0.046654663 container create 0d9e00587914d2441d4dd200cdf437f8b6e5d2e20a5889edeae5078d73959f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_nash, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:34:30 compute-0 systemd[1]: Started libpod-conmon-0d9e00587914d2441d4dd200cdf437f8b6e5d2e20a5889edeae5078d73959f61.scope.
Sep 30 14:34:30 compute-0 podman[262602]: 2025-09-30 14:34:30.001419685 +0000 UTC m=+0.022734051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:34:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:34:30 compute-0 podman[262602]: 2025-09-30 14:34:30.131832505 +0000 UTC m=+0.153146851 container init 0d9e00587914d2441d4dd200cdf437f8b6e5d2e20a5889edeae5078d73959f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_nash, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:34:30 compute-0 podman[262602]: 2025-09-30 14:34:30.141956907 +0000 UTC m=+0.163271243 container start 0d9e00587914d2441d4dd200cdf437f8b6e5d2e20a5889edeae5078d73959f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 14:34:30 compute-0 frosty_nash[262619]: 167 167
Sep 30 14:34:30 compute-0 systemd[1]: libpod-0d9e00587914d2441d4dd200cdf437f8b6e5d2e20a5889edeae5078d73959f61.scope: Deactivated successfully.
Sep 30 14:34:30 compute-0 podman[262602]: 2025-09-30 14:34:30.16256514 +0000 UTC m=+0.183879506 container attach 0d9e00587914d2441d4dd200cdf437f8b6e5d2e20a5889edeae5078d73959f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:34:30 compute-0 podman[262602]: 2025-09-30 14:34:30.164099661 +0000 UTC m=+0.185414007 container died 0d9e00587914d2441d4dd200cdf437f8b6e5d2e20a5889edeae5078d73959f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_nash, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:34:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0636f32543ad8e3d68fc03521aff386376c2fadd917c435de8544f62c9487b8d-merged.mount: Deactivated successfully.
Sep 30 14:34:30 compute-0 podman[262602]: 2025-09-30 14:34:30.320783626 +0000 UTC m=+0.342097962 container remove 0d9e00587914d2441d4dd200cdf437f8b6e5d2e20a5889edeae5078d73959f61 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:34:30 compute-0 systemd[1]: libpod-conmon-0d9e00587914d2441d4dd200cdf437f8b6e5d2e20a5889edeae5078d73959f61.scope: Deactivated successfully.
Sep 30 14:34:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:34:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143430 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:34:30 compute-0 podman[262645]: 2025-09-30 14:34:30.480281117 +0000 UTC m=+0.041794282 container create 1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:34:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:30.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:30 compute-0 systemd[1]: Started libpod-conmon-1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874.scope.
Sep 30 14:34:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85aa9b28fb491a70e882332ee8191983a8bc9bf7bcdec616862d91052e309ba8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85aa9b28fb491a70e882332ee8191983a8bc9bf7bcdec616862d91052e309ba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85aa9b28fb491a70e882332ee8191983a8bc9bf7bcdec616862d91052e309ba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85aa9b28fb491a70e882332ee8191983a8bc9bf7bcdec616862d91052e309ba8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:30 compute-0 podman[262645]: 2025-09-30 14:34:30.46175254 +0000 UTC m=+0.023265725 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:34:30 compute-0 podman[262645]: 2025-09-30 14:34:30.564915939 +0000 UTC m=+0.126429124 container init 1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:34:30 compute-0 podman[262645]: 2025-09-30 14:34:30.571403123 +0000 UTC m=+0.132916288 container start 1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:34:30 compute-0 podman[262645]: 2025-09-30 14:34:30.576352556 +0000 UTC m=+0.137865721 container attach 1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_varahamihira, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:34:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:30.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:31 compute-0 lvm[262737]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:34:31 compute-0 lvm[262737]: VG ceph_vg0 finished
Sep 30 14:34:31 compute-0 priceless_varahamihira[262662]: {}
Sep 30 14:34:31 compute-0 systemd[1]: libpod-1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874.scope: Deactivated successfully.
Sep 30 14:34:31 compute-0 systemd[1]: libpod-1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874.scope: Consumed 1.046s CPU time.
Sep 30 14:34:31 compute-0 conmon[262662]: conmon 1ae892d16a4ccd7731db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874.scope/container/memory.events
Sep 30 14:34:31 compute-0 podman[262645]: 2025-09-30 14:34:31.304856018 +0000 UTC m=+0.866369183 container died 1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-85aa9b28fb491a70e882332ee8191983a8bc9bf7bcdec616862d91052e309ba8-merged.mount: Deactivated successfully.
Sep 30 14:34:31 compute-0 podman[262645]: 2025-09-30 14:34:31.426344988 +0000 UTC m=+0.987858153 container remove 1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_varahamihira, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:34:31 compute-0 systemd[1]: libpod-conmon-1ae892d16a4ccd7731db7c21b8a5379f695d6a533f5afb3a900453370a9d9874.scope: Deactivated successfully.
Sep 30 14:34:31 compute-0 sudo[262537]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:34:31 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:34:31 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:31 compute-0 sudo[262753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:34:31 compute-0 sudo[262753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:31 compute-0 sudo[262753]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:31 compute-0 ceph-mon[74194]: pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:34:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:34:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 14:34:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1623833875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:34:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 14:34:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1623833875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:34:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:34:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:32.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1623833875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:34:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1623833875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:34:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1769064499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:34:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1769064499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:32.856045) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242872856083, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1223, "num_deletes": 251, "total_data_size": 2323651, "memory_usage": 2368752, "flush_reason": "Manual Compaction"}
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242872883280, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 2254108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18936, "largest_seqno": 20158, "table_properties": {"data_size": 2248231, "index_size": 3204, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12598, "raw_average_key_size": 19, "raw_value_size": 2236449, "raw_average_value_size": 3549, "num_data_blocks": 141, "num_entries": 630, "num_filter_entries": 630, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759242761, "oldest_key_time": 1759242761, "file_creation_time": 1759242872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 27320 microseconds, and 5884 cpu microseconds.
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:32.883355) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 2254108 bytes OK
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:32.883390) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:32.887376) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:32.887397) EVENT_LOG_v1 {"time_micros": 1759242872887392, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:32.887414) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2318199, prev total WAL file size 2318199, number of live WAL files 2.
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:32.888156) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(2201KB)], [41(12MB)]
Sep 30 14:34:32 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242872888297, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15692188, "oldest_snapshot_seqno": -1}
Sep 30 14:34:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 14:34:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4098033183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:34:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 14:34:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4098033183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:34:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5058 keys, 13479337 bytes, temperature: kUnknown
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242873029613, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 13479337, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13444151, "index_size": 21444, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 128752, "raw_average_key_size": 25, "raw_value_size": 13350876, "raw_average_value_size": 2639, "num_data_blocks": 881, "num_entries": 5058, "num_filter_entries": 5058, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759242872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:33.029857) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13479337 bytes
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:33.033080) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.0 rd, 95.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 12.8 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(12.9) write-amplify(6.0) OK, records in: 5578, records dropped: 520 output_compression: NoCompression
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:33.033118) EVENT_LOG_v1 {"time_micros": 1759242873033103, "job": 20, "event": "compaction_finished", "compaction_time_micros": 141380, "compaction_time_cpu_micros": 27770, "output_level": 6, "num_output_files": 1, "total_output_size": 13479337, "num_input_records": 5578, "num_output_records": 5058, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242873033670, "job": 20, "event": "table_file_deletion", "file_number": 43}
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759242873036327, "job": 20, "event": "table_file_deletion", "file_number": 41}
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:32.888033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:33.036432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:33.036437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:33.036439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:33.036440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:34:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:34:33.036442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:34:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:33.571Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:34:33 compute-0 ceph-mon[74194]: pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:34:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/4098033183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:34:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/4098033183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:34:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:34:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:34.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:34] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:34:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:34] "GET /metrics HTTP/1.1" 200 48414 "" "Prometheus/2.51.0"
Sep 30 14:34:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:34.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:35 compute-0 ceph-mon[74194]: pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:34:36 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Scheduled restart job, restart counter is at 7.
Sep 30 14:34:36 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:34:36 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.583s CPU time.
Sep 30 14:34:36 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:34:36 compute-0 podman[262830]: 2025-09-30 14:34:36.280290921 +0000 UTC m=+0.038367591 container create 38d2f44398f663089db14636735265e90874be739fcc8fc04c52c5d08862b893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e38fd8c78d4dae757227f05f360349bdcaaa0464f9bba60a7e041b225acd5e6/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e38fd8c78d4dae757227f05f360349bdcaaa0464f9bba60a7e041b225acd5e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e38fd8c78d4dae757227f05f360349bdcaaa0464f9bba60a7e041b225acd5e6/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e38fd8c78d4dae757227f05f360349bdcaaa0464f9bba60a7e041b225acd5e6/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:34:36 compute-0 podman[262830]: 2025-09-30 14:34:36.340917088 +0000 UTC m=+0.098993778 container init 38d2f44398f663089db14636735265e90874be739fcc8fc04c52c5d08862b893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:34:36 compute-0 podman[262830]: 2025-09-30 14:34:36.346231091 +0000 UTC m=+0.104307761 container start 38d2f44398f663089db14636735265e90874be739fcc8fc04c52c5d08862b893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 14:34:36 compute-0 bash[262830]: 38d2f44398f663089db14636735265e90874be739fcc8fc04c52c5d08862b893
Sep 30 14:34:36 compute-0 podman[262830]: 2025-09-30 14:34:36.264412015 +0000 UTC m=+0.022488695 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:34:36 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:34:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:36 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:34:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:36 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:34:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:36 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:34:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:36 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:34:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:36 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:34:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:36 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:34:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:36 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:34:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:36 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:34:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:34:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:36.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:37.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:34:37 compute-0 ceph-mon[74194]: pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:34:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:34:38.251 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:34:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:34:38.251 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:34:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:34:38.251 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:34:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:34:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:38.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:34:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:38.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:34:39 compute-0 ceph-mon[74194]: pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:34:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:34:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:40.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:40.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:41 compute-0 ceph-mon[74194]: pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:34:41 compute-0 sudo[262894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:34:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:41 compute-0 sudo[262894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:34:41 compute-0 sudo[262894]: pam_unix(sudo:session): session closed for user root
Sep 30 14:34:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:34:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:42 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:34:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:42 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:34:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:42.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:42.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:43.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:34:43 compute-0 ceph-mon[74194]: pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:34:44 compute-0 podman[262921]: 2025-09-30 14:34:44.12902147 +0000 UTC m=+0.054816563 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Sep 30 14:34:44 compute-0 podman[262924]: 2025-09-30 14:34:44.135603456 +0000 UTC m=+0.050188908 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Sep 30 14:34:44 compute-0 podman[262923]: 2025-09-30 14:34:44.152935752 +0000 UTC m=+0.071383157 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd)
Sep 30 14:34:44 compute-0 podman[262922]: 2025-09-30 14:34:44.156091516 +0000 UTC m=+0.078752664 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 14:34:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:34:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:44.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:34:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:34:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:44] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:34:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:44] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:34:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:44.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:34:46 compute-0 ceph-mon[74194]: pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Sep 30 14:34:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:34:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:46.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:46.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:47.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:34:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:47.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:34:48 compute-0 ceph-mon[74194]: pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:34:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:34:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:48.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 14:34:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:48 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:34:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:48.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:49 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28d0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:50 compute-0 ceph-mon[74194]: pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:34:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:50 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28d0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:34:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:50 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:50.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:51 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:51 compute-0 ceph-mon[74194]: pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:34:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:52 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28ac000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 4 op/s
Sep 30 14:34:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143452 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:34:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:52 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28d0002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:52.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:34:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:34:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:53 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:53 compute-0 ceph-mon[74194]: pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 4 op/s
Sep 30 14:34:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:53.576Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:34:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:54 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:34:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:54 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28ac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:54.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:54] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:34:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:34:54] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:34:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:55 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28d0002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:55 compute-0 ceph-mon[74194]: pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:34:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:56 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:34:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:56 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:56.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:34:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:56.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:57 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28ac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:57.072Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:34:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:34:57.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:34:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143457 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:34:57 compute-0 ceph-mon[74194]: pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:34:57 compute-0 nova_compute[261524]: 2025-09-30 14:34:57.856 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:34:57 compute-0 nova_compute[261524]: 2025-09-30 14:34:57.887 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:34:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:58 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28d0002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:34:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:58 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:34:58.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:34:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:34:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:34:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:34:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:34:59 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:34:59
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.nfs', 'volumes', 'default.rgw.log', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr']
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:34:59 compute-0 ceph-mon[74194]: pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:34:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:34:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:34:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:35:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:35:00 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28ac001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:35:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:35:00 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28d0002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:00.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:35:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:35:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:01.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:35:01 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:01 compute-0 ceph-mon[74194]: pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:35:01 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/387725695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:35:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:02 compute-0 sudo[263037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:35:02 compute-0 sudo[263037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:02 compute-0 sudo[263037]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:35:02 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:35:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:35:02 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28ac002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:02.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:02 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1943241391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:35:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:03.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:35:03 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28d0009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:03.576Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:35:03 compute-0 ceph-mon[74194]: pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.954 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.954 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.954 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.982 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.982 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.983 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.983 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.983 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.984 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.984 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.984 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:35:03 compute-0 nova_compute[261524]: 2025-09-30 14:35:03.984 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.022 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.022 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.022 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.023 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.023 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:35:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:35:04 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:35:04 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:35:04 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774936540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:35:04 compute-0 kernel: ganesha.nfsd[263021]: segfault at 50 ip 00007f2981a0832e sp 00007f293e7fb210 error 4 in libntirpc.so.5.8[7f29819ed000+2c000] likely on CPU 1 (core 0, socket 1)
Sep 30 14:35:04 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 14:35:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[262846]: 30/09/2025 14:35:04 : epoch 68dbea7c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f28a8002b10 fd 38 proxy ignored for local
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.489 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:35:04 compute-0 systemd[1]: Started Process Core Dump (PID 263086/UID 0).
Sep 30 14:35:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:04.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:04 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4233834496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:35:04 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/774936540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.651 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.652 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4930MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.652 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.653 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:35:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:04] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:35:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:04] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.810 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.810 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:35:04 compute-0 nova_compute[261524]: 2025-09-30 14:35:04.904 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:35:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:35:05 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1176525277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:35:05 compute-0 nova_compute[261524]: 2025-09-30 14:35:05.357 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:35:05 compute-0 nova_compute[261524]: 2025-09-30 14:35:05.365 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:35:05 compute-0 nova_compute[261524]: 2025-09-30 14:35:05.383 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:35:05 compute-0 nova_compute[261524]: 2025-09-30 14:35:05.384 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:35:05 compute-0 nova_compute[261524]: 2025-09-30 14:35:05.385 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:35:05 compute-0 systemd-coredump[263087]: Process 262850 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 53:
                                                    #0  0x00007f2981a0832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 14:35:05 compute-0 ceph-mon[74194]: pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:35:05 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3267604937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:35:05 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1176525277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:35:05 compute-0 systemd[1]: systemd-coredump@7-263086-0.service: Deactivated successfully.
Sep 30 14:35:05 compute-0 systemd[1]: systemd-coredump@7-263086-0.service: Consumed 1.035s CPU time.
Sep 30 14:35:05 compute-0 podman[263115]: 2025-09-30 14:35:05.777961674 +0000 UTC m=+0.039713586 container died 38d2f44398f663089db14636735265e90874be739fcc8fc04c52c5d08862b893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e38fd8c78d4dae757227f05f360349bdcaaa0464f9bba60a7e041b225acd5e6-merged.mount: Deactivated successfully.
Sep 30 14:35:05 compute-0 podman[263115]: 2025-09-30 14:35:05.823878487 +0000 UTC m=+0.085630339 container remove 38d2f44398f663089db14636735265e90874be739fcc8fc04c52c5d08862b893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:35:05 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Main process exited, code=exited, status=139/n/a
Sep 30 14:35:06 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Failed with result 'exit-code'.
Sep 30 14:35:06 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.279s CPU time.
Sep 30 14:35:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:35:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:06.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:07.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:07.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:35:07 compute-0 ceph-mon[74194]: pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:35:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:35:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:08.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:09 compute-0 ceph-mon[74194]: pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:35:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:35:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143510 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:35:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:10.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:11 compute-0 ceph-mon[74194]: pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:35:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:35:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:12.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:13.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:13.577Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:35:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:13.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:35:13 compute-0 ceph-mon[74194]: pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:35:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:35:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:14.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:35:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:35:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:35:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:14] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:35:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:14] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:35:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:15.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:15 compute-0 podman[263169]: 2025-09-30 14:35:15.160813688 +0000 UTC m=+0.074832619 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=multipathd)
Sep 30 14:35:15 compute-0 podman[263167]: 2025-09-30 14:35:15.183197818 +0000 UTC m=+0.100702033 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:35:15 compute-0 podman[263170]: 2025-09-30 14:35:15.184878513 +0000 UTC m=+0.084306623 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 14:35:15 compute-0 podman[263168]: 2025-09-30 14:35:15.200102961 +0000 UTC m=+0.115551941 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:35:15 compute-0 ceph-mon[74194]: pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Sep 30 14:35:16 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Scheduled restart job, restart counter is at 8.
Sep 30 14:35:16 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:35:16 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.279s CPU time.
Sep 30 14:35:16 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:35:16 compute-0 podman[263304]: 2025-09-30 14:35:16.306406673 +0000 UTC m=+0.054569645 container create d71623795855505ad9d7680a4e0fc119f5b35be89ab837ffbfb0a5b54eb96aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Sep 30 14:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db962e2db3d53f37b2527ed46033fcd31f22205d966d5b62d39b737ce038151/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db962e2db3d53f37b2527ed46033fcd31f22205d966d5b62d39b737ce038151/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db962e2db3d53f37b2527ed46033fcd31f22205d966d5b62d39b737ce038151/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db962e2db3d53f37b2527ed46033fcd31f22205d966d5b62d39b737ce038151/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:16 compute-0 podman[263304]: 2025-09-30 14:35:16.360009332 +0000 UTC m=+0.108172294 container init d71623795855505ad9d7680a4e0fc119f5b35be89ab837ffbfb0a5b54eb96aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 14:35:16 compute-0 podman[263304]: 2025-09-30 14:35:16.367485272 +0000 UTC m=+0.115648224 container start d71623795855505ad9d7680a4e0fc119f5b35be89ab837ffbfb0a5b54eb96aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:35:16 compute-0 bash[263304]: d71623795855505ad9d7680a4e0fc119f5b35be89ab837ffbfb0a5b54eb96aab
Sep 30 14:35:16 compute-0 podman[263304]: 2025-09-30 14:35:16.281984478 +0000 UTC m=+0.030147510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:35:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:16 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:35:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:16 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:35:16 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:35:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:16 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:35:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:16 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:35:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:16 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:35:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:16 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:35:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:16 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:35:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:16 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:35:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:35:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:35:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:16.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:35:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:17.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:17.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:35:17 compute-0 ceph-mon[74194]: pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Sep 30 14:35:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 14:35:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:18.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:19.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:19 compute-0 ceph-mon[74194]: pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 14:35:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 14:35:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:35:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:20.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:35:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:21 compute-0 ceph-mon[74194]: pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Sep 30 14:35:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:22 compute-0 sudo[263367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:35:22 compute-0 sudo[263367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:22 compute-0 sudo[263367]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:35:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:22.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:35:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:35:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:23.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:23.579Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:35:23 compute-0 ceph-mon[74194]: pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:35:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 596 B/s wr, 2 op/s
Sep 30 14:35:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:24.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:24] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:35:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:24] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:35:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143525 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:35:25 compute-0 ceph-mon[74194]: pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 596 B/s wr, 2 op/s
Sep 30 14:35:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 14:35:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:26.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:27.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:35:27 compute-0 ceph-mon[74194]: pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 14:35:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Sep 30 14:35:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:28.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001b:nfs.cephfs.2: -2
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 14:35:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:35:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:29 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb660000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:35:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:35:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:35:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:35:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:35:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:35:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:35:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:35:29 compute-0 ceph-mon[74194]: pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Sep 30 14:35:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:35:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:30 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Sep 30 14:35:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:30 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:35:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:30.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:35:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.24553 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Sep 30 14:35:30 compute-0 ceph-mgr[74485]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Sep 30 14:35:30 compute-0 ceph-mgr[74485]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Sep 30 14:35:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.24556 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Sep 30 14:35:30 compute-0 ceph-mgr[74485]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Sep 30 14:35:30 compute-0 ceph-mgr[74485]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Sep 30 14:35:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.24553 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Sep 30 14:35:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:31.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:31 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb660000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3143694713' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Sep 30 14:35:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3207517197' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Sep 30 14:35:31 compute-0 sudo[263417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:35:31 compute-0 sudo[263417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:31 compute-0 sudo[263417]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:31 compute-0 sudo[263442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:35:31 compute-0 sudo[263442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:32 compute-0 ceph-mon[74194]: pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Sep 30 14:35:32 compute-0 ceph-mon[74194]: from='client.24553 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Sep 30 14:35:32 compute-0 ceph-mon[74194]: from='client.24556 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Sep 30 14:35:32 compute-0 ceph-mon[74194]: from='client.24553 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Sep 30 14:35:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:32 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 14:35:32 compute-0 sudo[263442]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143532 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:35:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:32 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:32.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:35:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:35:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:35:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:35:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:35:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:35:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:35:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:35:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:35:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:35:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:35:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:35:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:35:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:35:32 compute-0 sudo[263499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:35:32 compute-0 sudo[263499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:32 compute-0 sudo[263499]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:32 compute-0 sudo[263524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:35:32 compute-0 sudo[263524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:33.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:33 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:33 compute-0 ceph-mon[74194]: pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Sep 30 14:35:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:35:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:35:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:35:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:35:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:35:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:35:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:35:33 compute-0 podman[263592]: 2025-09-30 14:35:33.20129804 +0000 UTC m=+0.040296003 container create 23c452888cf372f1ad104de1e4d9cf7933661c3f24b308e7e5de260ebc072cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wilbur, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:35:33 compute-0 systemd[1]: Started libpod-conmon-23c452888cf372f1ad104de1e4d9cf7933661c3f24b308e7e5de260ebc072cab.scope.
Sep 30 14:35:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:35:33 compute-0 podman[263592]: 2025-09-30 14:35:33.185761803 +0000 UTC m=+0.024759796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:35:33 compute-0 podman[263592]: 2025-09-30 14:35:33.286513417 +0000 UTC m=+0.125511410 container init 23c452888cf372f1ad104de1e4d9cf7933661c3f24b308e7e5de260ebc072cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:35:33 compute-0 podman[263592]: 2025-09-30 14:35:33.293385552 +0000 UTC m=+0.132383515 container start 23c452888cf372f1ad104de1e4d9cf7933661c3f24b308e7e5de260ebc072cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Sep 30 14:35:33 compute-0 podman[263592]: 2025-09-30 14:35:33.29630246 +0000 UTC m=+0.135300423 container attach 23c452888cf372f1ad104de1e4d9cf7933661c3f24b308e7e5de260ebc072cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wilbur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:35:33 compute-0 festive_wilbur[263608]: 167 167
Sep 30 14:35:33 compute-0 systemd[1]: libpod-23c452888cf372f1ad104de1e4d9cf7933661c3f24b308e7e5de260ebc072cab.scope: Deactivated successfully.
Sep 30 14:35:33 compute-0 podman[263592]: 2025-09-30 14:35:33.299198338 +0000 UTC m=+0.138196321 container died 23c452888cf372f1ad104de1e4d9cf7933661c3f24b308e7e5de260ebc072cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 14:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0209b18c31d449c20c5bb43eebd7bd9b8b09c346c74b3baa3bdda7c47fb1c36-merged.mount: Deactivated successfully.
Sep 30 14:35:33 compute-0 podman[263592]: 2025-09-30 14:35:33.336753996 +0000 UTC m=+0.175751959 container remove 23c452888cf372f1ad104de1e4d9cf7933661c3f24b308e7e5de260ebc072cab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:35:33 compute-0 systemd[1]: libpod-conmon-23c452888cf372f1ad104de1e4d9cf7933661c3f24b308e7e5de260ebc072cab.scope: Deactivated successfully.
Sep 30 14:35:33 compute-0 podman[263632]: 2025-09-30 14:35:33.487663866 +0000 UTC m=+0.036044099 container create 437c876b4408a9d140b12dbb66661947146207f08747c01ce053dfd70a0fb6fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:35:33 compute-0 systemd[1]: Started libpod-conmon-437c876b4408a9d140b12dbb66661947146207f08747c01ce053dfd70a0fb6fa.scope.
Sep 30 14:35:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fd283306e631cfb7fa4bc2a5b089c4ff79e47956e4f2b0d223cd6ec5a5826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fd283306e631cfb7fa4bc2a5b089c4ff79e47956e4f2b0d223cd6ec5a5826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fd283306e631cfb7fa4bc2a5b089c4ff79e47956e4f2b0d223cd6ec5a5826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fd283306e631cfb7fa4bc2a5b089c4ff79e47956e4f2b0d223cd6ec5a5826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fd283306e631cfb7fa4bc2a5b089c4ff79e47956e4f2b0d223cd6ec5a5826/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:33 compute-0 podman[263632]: 2025-09-30 14:35:33.472680054 +0000 UTC m=+0.021060317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:35:33 compute-0 podman[263632]: 2025-09-30 14:35:33.57390704 +0000 UTC m=+0.122287313 container init 437c876b4408a9d140b12dbb66661947146207f08747c01ce053dfd70a0fb6fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_dijkstra, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:35:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:33.579Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:35:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:33.580Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:35:33 compute-0 podman[263632]: 2025-09-30 14:35:33.581935766 +0000 UTC m=+0.130315999 container start 437c876b4408a9d140b12dbb66661947146207f08747c01ce053dfd70a0fb6fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:35:33 compute-0 podman[263632]: 2025-09-30 14:35:33.585204634 +0000 UTC m=+0.133584897 container attach 437c876b4408a9d140b12dbb66661947146207f08747c01ce053dfd70a0fb6fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 14:35:33 compute-0 unix_chkpwd[263664]: password check failed for user (root)
Sep 30 14:35:33 compute-0 sshd-session[263590]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Sep 30 14:35:33 compute-0 zen_dijkstra[263649]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:35:33 compute-0 zen_dijkstra[263649]: --> All data devices are unavailable
Sep 30 14:35:33 compute-0 systemd[1]: libpod-437c876b4408a9d140b12dbb66661947146207f08747c01ce053dfd70a0fb6fa.scope: Deactivated successfully.
Sep 30 14:35:33 compute-0 podman[263632]: 2025-09-30 14:35:33.918818037 +0000 UTC m=+0.467198270 container died 437c876b4408a9d140b12dbb66661947146207f08747c01ce053dfd70a0fb6fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c0fd283306e631cfb7fa4bc2a5b089c4ff79e47956e4f2b0d223cd6ec5a5826-merged.mount: Deactivated successfully.
Sep 30 14:35:33 compute-0 podman[263632]: 2025-09-30 14:35:33.960061834 +0000 UTC m=+0.508442067 container remove 437c876b4408a9d140b12dbb66661947146207f08747c01ce053dfd70a0fb6fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 14:35:33 compute-0 systemd[1]: libpod-conmon-437c876b4408a9d140b12dbb66661947146207f08747c01ce053dfd70a0fb6fa.scope: Deactivated successfully.
Sep 30 14:35:33 compute-0 sudo[263524]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:34 compute-0 sudo[263679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:35:34 compute-0 sudo[263679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:34 compute-0 sudo[263679]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:34 compute-0 sudo[263704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:35:34 compute-0 sudo[263704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:34 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6600021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:35:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:34 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:34 compute-0 podman[263771]: 2025-09-30 14:35:34.550034378 +0000 UTC m=+0.040229060 container create 4b550654a17f2ceba55dd4e0dea42ddc196d09041c826e5ab5a0d39dec910ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:35:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:34.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:34 compute-0 systemd[1]: Started libpod-conmon-4b550654a17f2ceba55dd4e0dea42ddc196d09041c826e5ab5a0d39dec910ba0.scope.
Sep 30 14:35:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:35:34 compute-0 podman[263771]: 2025-09-30 14:35:34.627714343 +0000 UTC m=+0.117909075 container init 4b550654a17f2ceba55dd4e0dea42ddc196d09041c826e5ab5a0d39dec910ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bardeen, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:35:34 compute-0 podman[263771]: 2025-09-30 14:35:34.534421339 +0000 UTC m=+0.024616041 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:35:34 compute-0 podman[263771]: 2025-09-30 14:35:34.640338392 +0000 UTC m=+0.130533084 container start 4b550654a17f2ceba55dd4e0dea42ddc196d09041c826e5ab5a0d39dec910ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:35:34 compute-0 podman[263771]: 2025-09-30 14:35:34.643494637 +0000 UTC m=+0.133689359 container attach 4b550654a17f2ceba55dd4e0dea42ddc196d09041c826e5ab5a0d39dec910ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:35:34 compute-0 unruffled_bardeen[263787]: 167 167
Sep 30 14:35:34 compute-0 systemd[1]: libpod-4b550654a17f2ceba55dd4e0dea42ddc196d09041c826e5ab5a0d39dec910ba0.scope: Deactivated successfully.
Sep 30 14:35:34 compute-0 podman[263771]: 2025-09-30 14:35:34.647042042 +0000 UTC m=+0.137236724 container died 4b550654a17f2ceba55dd4e0dea42ddc196d09041c826e5ab5a0d39dec910ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bardeen, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8d33625ba17fcbffe1e820da4a665014324bd7e985a579cfff6b7569ca0d7cd-merged.mount: Deactivated successfully.
Sep 30 14:35:34 compute-0 podman[263771]: 2025-09-30 14:35:34.685473243 +0000 UTC m=+0.175667935 container remove 4b550654a17f2ceba55dd4e0dea42ddc196d09041c826e5ab5a0d39dec910ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:35:34 compute-0 systemd[1]: libpod-conmon-4b550654a17f2ceba55dd4e0dea42ddc196d09041c826e5ab5a0d39dec910ba0.scope: Deactivated successfully.
Sep 30 14:35:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:34] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:35:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:34] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Sep 30 14:35:34 compute-0 podman[263811]: 2025-09-30 14:35:34.885205024 +0000 UTC m=+0.047897337 container create 50c8e3e63b374dbea9a602ae64e6a100fa41ba7a7e970f8b9f07f3da4d418aae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:35:34 compute-0 systemd[1]: Started libpod-conmon-50c8e3e63b374dbea9a602ae64e6a100fa41ba7a7e970f8b9f07f3da4d418aae.scope.
Sep 30 14:35:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c0e47abb71d0c8dcf868f91df8be60c68374197adb88277ad3146e5c7b7fbc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c0e47abb71d0c8dcf868f91df8be60c68374197adb88277ad3146e5c7b7fbc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c0e47abb71d0c8dcf868f91df8be60c68374197adb88277ad3146e5c7b7fbc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c0e47abb71d0c8dcf868f91df8be60c68374197adb88277ad3146e5c7b7fbc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:34 compute-0 podman[263811]: 2025-09-30 14:35:34.946115919 +0000 UTC m=+0.108808262 container init 50c8e3e63b374dbea9a602ae64e6a100fa41ba7a7e970f8b9f07f3da4d418aae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_napier, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:35:34 compute-0 podman[263811]: 2025-09-30 14:35:34.952239783 +0000 UTC m=+0.114932096 container start 50c8e3e63b374dbea9a602ae64e6a100fa41ba7a7e970f8b9f07f3da4d418aae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 14:35:34 compute-0 podman[263811]: 2025-09-30 14:35:34.955282755 +0000 UTC m=+0.117975088 container attach 50c8e3e63b374dbea9a602ae64e6a100fa41ba7a7e970f8b9f07f3da4d418aae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 14:35:34 compute-0 podman[263811]: 2025-09-30 14:35:34.865499975 +0000 UTC m=+0.028192308 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:35:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:35.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:35 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:35 compute-0 priceless_napier[263827]: {
Sep 30 14:35:35 compute-0 priceless_napier[263827]:     "0": [
Sep 30 14:35:35 compute-0 priceless_napier[263827]:         {
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "devices": [
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "/dev/loop3"
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             ],
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "lv_name": "ceph_lv0",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "lv_size": "21470642176",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "name": "ceph_lv0",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "tags": {
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.cluster_name": "ceph",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.crush_device_class": "",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.encrypted": "0",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.osd_id": "0",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.type": "block",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.vdo": "0",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:                 "ceph.with_tpm": "0"
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             },
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "type": "block",
Sep 30 14:35:35 compute-0 priceless_napier[263827]:             "vg_name": "ceph_vg0"
Sep 30 14:35:35 compute-0 priceless_napier[263827]:         }
Sep 30 14:35:35 compute-0 priceless_napier[263827]:     ]
Sep 30 14:35:35 compute-0 priceless_napier[263827]: }
Sep 30 14:35:35 compute-0 systemd[1]: libpod-50c8e3e63b374dbea9a602ae64e6a100fa41ba7a7e970f8b9f07f3da4d418aae.scope: Deactivated successfully.
Sep 30 14:35:35 compute-0 podman[263811]: 2025-09-30 14:35:35.258532894 +0000 UTC m=+0.421225207 container died 50c8e3e63b374dbea9a602ae64e6a100fa41ba7a7e970f8b9f07f3da4d418aae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c0e47abb71d0c8dcf868f91df8be60c68374197adb88277ad3146e5c7b7fbc5-merged.mount: Deactivated successfully.
Sep 30 14:35:35 compute-0 podman[263811]: 2025-09-30 14:35:35.307643702 +0000 UTC m=+0.470336015 container remove 50c8e3e63b374dbea9a602ae64e6a100fa41ba7a7e970f8b9f07f3da4d418aae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:35:35 compute-0 systemd[1]: libpod-conmon-50c8e3e63b374dbea9a602ae64e6a100fa41ba7a7e970f8b9f07f3da4d418aae.scope: Deactivated successfully.
Sep 30 14:35:35 compute-0 sudo[263704]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:35 compute-0 sudo[263849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:35:35 compute-0 sudo[263849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:35 compute-0 sudo[263849]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:35 compute-0 ceph-mon[74194]: pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Sep 30 14:35:35 compute-0 sudo[263874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:35:35 compute-0 sudo[263874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:35 compute-0 podman[263939]: 2025-09-30 14:35:35.988079954 +0000 UTC m=+0.056826197 container create 819fc54decd891276e15e81055dc3545fdff51a942b2af31dbe9b2ed8605aa36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kowalevski, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:35:36 compute-0 systemd[1]: Started libpod-conmon-819fc54decd891276e15e81055dc3545fdff51a942b2af31dbe9b2ed8605aa36.scope.
Sep 30 14:35:36 compute-0 podman[263939]: 2025-09-30 14:35:35.958972792 +0000 UTC m=+0.027719135 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:35:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:35:36 compute-0 podman[263939]: 2025-09-30 14:35:36.082440616 +0000 UTC m=+0.151186929 container init 819fc54decd891276e15e81055dc3545fdff51a942b2af31dbe9b2ed8605aa36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:35:36 compute-0 podman[263939]: 2025-09-30 14:35:36.094800388 +0000 UTC m=+0.163546641 container start 819fc54decd891276e15e81055dc3545fdff51a942b2af31dbe9b2ed8605aa36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kowalevski, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:35:36 compute-0 podman[263939]: 2025-09-30 14:35:36.098300342 +0000 UTC m=+0.167046595 container attach 819fc54decd891276e15e81055dc3545fdff51a942b2af31dbe9b2ed8605aa36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kowalevski, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:35:36 compute-0 agitated_kowalevski[263956]: 167 167
Sep 30 14:35:36 compute-0 systemd[1]: libpod-819fc54decd891276e15e81055dc3545fdff51a942b2af31dbe9b2ed8605aa36.scope: Deactivated successfully.
Sep 30 14:35:36 compute-0 podman[263939]: 2025-09-30 14:35:36.102610677 +0000 UTC m=+0.171356950 container died 819fc54decd891276e15e81055dc3545fdff51a942b2af31dbe9b2ed8605aa36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kowalevski, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Sep 30 14:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2321ef18f72103507477a82073dc1f34ab2569939a011821a4beba4d2a8285b9-merged.mount: Deactivated successfully.
Sep 30 14:35:36 compute-0 podman[263939]: 2025-09-30 14:35:36.141883441 +0000 UTC m=+0.210629684 container remove 819fc54decd891276e15e81055dc3545fdff51a942b2af31dbe9b2ed8605aa36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kowalevski, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Sep 30 14:35:36 compute-0 systemd[1]: libpod-conmon-819fc54decd891276e15e81055dc3545fdff51a942b2af31dbe9b2ed8605aa36.scope: Deactivated successfully.
Sep 30 14:35:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:36 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:36 compute-0 podman[263980]: 2025-09-30 14:35:36.342001792 +0000 UTC m=+0.046428177 container create 87dc9aca1199b3346eacb23669dcbda68a207f2f8d5f9b575661608f2f0e7419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:35:36 compute-0 systemd[1]: Started libpod-conmon-87dc9aca1199b3346eacb23669dcbda68a207f2f8d5f9b575661608f2f0e7419.scope.
Sep 30 14:35:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e82403a7a487d0cc70a92e152015953328049a2e2182a555cbfce4064bdc6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e82403a7a487d0cc70a92e152015953328049a2e2182a555cbfce4064bdc6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e82403a7a487d0cc70a92e152015953328049a2e2182a555cbfce4064bdc6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e82403a7a487d0cc70a92e152015953328049a2e2182a555cbfce4064bdc6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:35:36 compute-0 podman[263980]: 2025-09-30 14:35:36.321949484 +0000 UTC m=+0.026375919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:35:36 compute-0 sshd-session[263590]: Failed password for root from 193.46.255.159 port 32014 ssh2
Sep 30 14:35:36 compute-0 podman[263980]: 2025-09-30 14:35:36.4328199 +0000 UTC m=+0.137246305 container init 87dc9aca1199b3346eacb23669dcbda68a207f2f8d5f9b575661608f2f0e7419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_turing, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:35:36 compute-0 podman[263980]: 2025-09-30 14:35:36.43803769 +0000 UTC m=+0.142464095 container start 87dc9aca1199b3346eacb23669dcbda68a207f2f8d5f9b575661608f2f0e7419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:35:36 compute-0 podman[263980]: 2025-09-30 14:35:36.442951372 +0000 UTC m=+0.147377777 container attach 87dc9aca1199b3346eacb23669dcbda68a207f2f8d5f9b575661608f2f0e7419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 14:35:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 767 B/s wr, 43 op/s
Sep 30 14:35:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:36 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6600021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:36.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:37 compute-0 lvm[264070]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:35:37 compute-0 lvm[264070]: VG ceph_vg0 finished
Sep 30 14:35:37 compute-0 lvm[264072]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:35:37 compute-0 lvm[264072]: VG ceph_vg0 finished
Sep 30 14:35:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:37.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:37.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:35:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:37.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:35:37 compute-0 gracious_turing[263996]: {}
Sep 30 14:35:37 compute-0 systemd[1]: libpod-87dc9aca1199b3346eacb23669dcbda68a207f2f8d5f9b575661608f2f0e7419.scope: Deactivated successfully.
Sep 30 14:35:37 compute-0 podman[263980]: 2025-09-30 14:35:37.108527024 +0000 UTC m=+0.812953419 container died 87dc9aca1199b3346eacb23669dcbda68a207f2f8d5f9b575661608f2f0e7419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:35:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:37 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-13e82403a7a487d0cc70a92e152015953328049a2e2182a555cbfce4064bdc6a-merged.mount: Deactivated successfully.
Sep 30 14:35:37 compute-0 podman[263980]: 2025-09-30 14:35:37.153210544 +0000 UTC m=+0.857636929 container remove 87dc9aca1199b3346eacb23669dcbda68a207f2f8d5f9b575661608f2f0e7419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:35:37 compute-0 systemd[1]: libpod-conmon-87dc9aca1199b3346eacb23669dcbda68a207f2f8d5f9b575661608f2f0e7419.scope: Deactivated successfully.
Sep 30 14:35:37 compute-0 sudo[263874]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:35:37 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:35:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:35:37 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:35:37 compute-0 sudo[264087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:35:37 compute-0 sudo[264087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:37 compute-0 sudo[264087]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:37 compute-0 ceph-mon[74194]: pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 767 B/s wr, 43 op/s
Sep 30 14:35:37 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:35:37 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:35:37 compute-0 unix_chkpwd[264114]: password check failed for user (root)
Sep 30 14:35:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:35:38.252 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:35:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:35:38.253 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:35:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:35:38.253 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:35:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:38 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Sep 30 14:35:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:38 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:38.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:39.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:39 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6600021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:39 compute-0 ceph-mon[74194]: pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Sep 30 14:35:39 compute-0 sshd-session[263590]: Failed password for root from 193.46.255.159 port 32014 ssh2
Sep 30 14:35:40 compute-0 unix_chkpwd[264117]: password check failed for user (root)
Sep 30 14:35:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:40 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Sep 30 14:35:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:40 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:40.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:41 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:41 compute-0 sshd-session[263590]: Failed password for root from 193.46.255.159 port 32014 ssh2
Sep 30 14:35:41 compute-0 ceph-mon[74194]: pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Sep 30 14:35:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:42 compute-0 sshd-session[263590]: Received disconnect from 193.46.255.159 port 32014:11:  [preauth]
Sep 30 14:35:42 compute-0 sshd-session[263590]: Disconnected from authenticating user root 193.46.255.159 port 32014 [preauth]
Sep 30 14:35:42 compute-0 sshd-session[263590]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Sep 30 14:35:42 compute-0 sudo[264120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:35:42 compute-0 sudo[264120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:35:42 compute-0 sudo[264120]: pam_unix(sudo:session): session closed for user root
Sep 30 14:35:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:42 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6600095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Sep 30 14:35:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:42 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:42 compute-0 unix_chkpwd[264147]: password check failed for user (root)
Sep 30 14:35:42 compute-0 sshd-session[264145]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Sep 30 14:35:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:43 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:43.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:35:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:43.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:35:43 compute-0 ceph-mon[74194]: pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Sep 30 14:35:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:44 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Sep 30 14:35:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:44 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6600095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:44.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:35:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:35:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:44] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Sep 30 14:35:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:44] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Sep 30 14:35:44 compute-0 sshd-session[264145]: Failed password for root from 193.46.255.159 port 38638 ssh2
Sep 30 14:35:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:45.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:45 compute-0 unix_chkpwd[264150]: password check failed for user (root)
Sep 30 14:35:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:45 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:45 compute-0 ceph-mon[74194]: pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Sep 30 14:35:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:35:46 compute-0 podman[264153]: 2025-09-30 14:35:46.149939394 +0000 UTC m=+0.072245000 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:35:46 compute-0 podman[264156]: 2025-09-30 14:35:46.174845423 +0000 UTC m=+0.080874482 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:35:46 compute-0 podman[264155]: 2025-09-30 14:35:46.189049734 +0000 UTC m=+0.106379326 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Sep 30 14:35:46 compute-0 podman[264154]: 2025-09-30 14:35:46.189244569 +0000 UTC m=+0.104328011 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 14:35:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:46 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Sep 30 14:35:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:46 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:46.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:47.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:47.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:35:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:47 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb66000a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:47 compute-0 sshd-session[264145]: Failed password for root from 193.46.255.159 port 38638 ssh2
Sep 30 14:35:47 compute-0 ceph-mon[74194]: pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Sep 30 14:35:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:48 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Sep 30 14:35:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:48 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:48.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:49 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:49 compute-0 unix_chkpwd[264236]: password check failed for user (root)
Sep 30 14:35:49 compute-0 ceph-mon[74194]: pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Sep 30 14:35:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:50 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb66000a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Sep 30 14:35:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:50 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:50.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:51 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:51 compute-0 sshd-session[264145]: Failed password for root from 193.46.255.159 port 38638 ssh2
Sep 30 14:35:51 compute-0 ceph-mon[74194]: pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Sep 30 14:35:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:52 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.24572 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Sep 30 14:35:52 compute-0 ceph-mgr[74485]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Sep 30 14:35:52 compute-0 ceph-mgr[74485]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Sep 30 14:35:52 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.24578 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Sep 30 14:35:52 compute-0 ceph-mgr[74485]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Sep 30 14:35:52 compute-0 ceph-mgr[74485]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Sep 30 14:35:52 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.24578 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Sep 30 14:35:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:52 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Sep 30 14:35:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:52 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb66000a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:52.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:52 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/697285396' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Sep 30 14:35:52 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3525699199' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Sep 30 14:35:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:53 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:53 compute-0 sshd-session[264145]: Received disconnect from 193.46.255.159 port 38638:11:  [preauth]
Sep 30 14:35:53 compute-0 sshd-session[264145]: Disconnected from authenticating user root 193.46.255.159 port 38638 [preauth]
Sep 30 14:35:53 compute-0 sshd-session[264145]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Sep 30 14:35:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:53.583Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:35:53 compute-0 ceph-mon[74194]: from='client.24572 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Sep 30 14:35:53 compute-0 ceph-mon[74194]: from='client.24578 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Sep 30 14:35:53 compute-0 ceph-mon[74194]: from='client.24578 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Sep 30 14:35:53 compute-0 ceph-mon[74194]: pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Sep 30 14:35:54 compute-0 unix_chkpwd[264245]: password check failed for user (root)
Sep 30 14:35:54 compute-0 sshd-session[264241]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Sep 30 14:35:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:54 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:35:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:54 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:54] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Sep 30 14:35:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:35:54] "GET /metrics HTTP/1.1" 200 48413 "" "Prometheus/2.51.0"
Sep 30 14:35:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:55.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:55 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb66000a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:55 compute-0 ceph-mon[74194]: pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:35:56 compute-0 sshd-session[264241]: Failed password for root from 193.46.255.159 port 60930 ssh2
Sep 30 14:35:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:56 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:35:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:56 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:56.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:35:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:35:57.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:35:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:35:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:57.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:35:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:57 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:57 compute-0 ceph-mon[74194]: pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:35:58 compute-0 unix_chkpwd[264250]: password check failed for user (root)
Sep 30 14:35:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:58 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb66000a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:35:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:58 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb66000a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:35:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:35:58.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:35:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:35:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:35:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:35:59.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:35:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:35:59 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:35:59
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'images', '.mgr', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', '.nfs']
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:35:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:35:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:35:59 compute-0 sshd-session[264241]: Failed password for root from 193.46.255.159 port 60930 ssh2
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:35:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:36:00 compute-0 ceph-mon[74194]: pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:36:00 compute-0 unix_chkpwd[264253]: password check failed for user (root)
Sep 30 14:36:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:00 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:00 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:00.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:36:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:36:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:36:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:36:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:36:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:36:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:36:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:36:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:36:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:36:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:01.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:01 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb630000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:01 compute-0 sshd-session[264241]: Failed password for root from 193.46.255.159 port 60930 ssh2
Sep 30 14:36:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:02 compute-0 ceph-mon[74194]: pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:02 compute-0 sshd-session[264241]: Received disconnect from 193.46.255.159 port 60930:11:  [preauth]
Sep 30 14:36:02 compute-0 sshd-session[264241]: Disconnected from authenticating user root 193.46.255.159 port 60930 [preauth]
Sep 30 14:36:02 compute-0 sshd-session[264241]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Sep 30 14:36:02 compute-0 sudo[264258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:36:02 compute-0 sudo[264258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:02 compute-0 sudo[264258]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:02 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:36:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:02 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:02.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:03.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:03 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:03 compute-0 ceph-mon[74194]: pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:36:03 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3651056691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:36:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:03.585Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:36:04 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2041890311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:36:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:04 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:04 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:04.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:04] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:36:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:04] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:36:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000054s ======
Sep 30 14:36:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Sep 30 14:36:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:05 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:05 compute-0 ceph-mon[74194]: pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.378 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.400 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.401 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.401 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.423 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.424 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.425 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.425 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.425 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.425 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.425 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.954 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.956 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.956 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.979 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.979 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.980 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.980 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:36:05 compute-0 nova_compute[261524]: 2025-09-30 14:36:05.980 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:36:06 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2956818519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:36:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:06 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:36:06 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742306097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:36:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:06 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:06 compute-0 nova_compute[261524]: 2025-09-30 14:36:06.509 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:36:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:06.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:06 compute-0 nova_compute[261524]: 2025-09-30 14:36:06.693 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:36:06 compute-0 nova_compute[261524]: 2025-09-30 14:36:06.694 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4926MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:36:06 compute-0 nova_compute[261524]: 2025-09-30 14:36:06.695 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:36:06 compute-0 nova_compute[261524]: 2025-09-30 14:36:06.695 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:36:06 compute-0 nova_compute[261524]: 2025-09-30 14:36:06.796 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:36:06 compute-0 nova_compute[261524]: 2025-09-30 14:36:06.796 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:36:06 compute-0 nova_compute[261524]: 2025-09-30 14:36:06.813 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:36:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:07.079Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:36:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:07.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:36:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:07.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:07 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:07 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3742306097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:36:07 compute-0 ceph-mon[74194]: pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:07 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2509901728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:36:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:36:07 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386958093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:36:07 compute-0 nova_compute[261524]: 2025-09-30 14:36:07.289 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:36:07 compute-0 nova_compute[261524]: 2025-09-30 14:36:07.296 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:36:07 compute-0 nova_compute[261524]: 2025-09-30 14:36:07.323 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:36:07 compute-0 nova_compute[261524]: 2025-09-30 14:36:07.325 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:36:07 compute-0 nova_compute[261524]: 2025-09-30 14:36:07.325 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:36:08 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3386958093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:36:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:08 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:08 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:08.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:09.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:09 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6300016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:09 compute-0 ceph-mon[74194]: pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:10 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:10 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:10.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:11.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:11 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:11 compute-0 ceph-mon[74194]: pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1748850653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:36:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1748850653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:36:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:12 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb630002050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:36:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:12 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:12.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:13.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:13 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:13.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:36:13 compute-0 ceph-mon[74194]: pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:36:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:14 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:14 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb630002050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:14.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:36:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:36:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:14] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Sep 30 14:36:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:14] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Sep 30 14:36:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:36:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:15.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:15 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:16 compute-0 ceph-mon[74194]: pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:16 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Sep 30 14:36:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:16 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:17.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:36:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:17.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:36:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:17.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:17 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb630002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:17 compute-0 podman[264343]: 2025-09-30 14:36:17.159359512 +0000 UTC m=+0.071726396 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Sep 30 14:36:17 compute-0 podman[264344]: 2025-09-30 14:36:17.17008733 +0000 UTC m=+0.080585554 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:36:17 compute-0 podman[264341]: 2025-09-30 14:36:17.173551123 +0000 UTC m=+0.095661538 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20250923, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Sep 30 14:36:17 compute-0 podman[264342]: 2025-09-30 14:36:17.178005813 +0000 UTC m=+0.097318923 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller)
Sep 30 14:36:18 compute-0 ceph-mon[74194]: pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Sep 30 14:36:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:18 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Sep 30 14:36:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:18 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:18.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:19.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:19 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb630002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:19 compute-0 ceph-mon[74194]: pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Sep 30 14:36:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:20 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Sep 30 14:36:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:20 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:36:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:20.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:36:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:21.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:21 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:21 compute-0 ceph-mon[74194]: pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Sep 30 14:36:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:22 compute-0 sudo[264430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:36:22 compute-0 sudo[264430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:22 compute-0 sudo[264430]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:36:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:22 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:22.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:23.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:23 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:23.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:36:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:23.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:36:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:23.587Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:36:23 compute-0 ceph-mon[74194]: pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:36:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:24 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:36:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:24 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:36:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:24.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:36:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:24] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Sep 30 14:36:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:24] "GET /metrics HTTP/1.1" 200 48417 "" "Prometheus/2.51.0"
Sep 30 14:36:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:25.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:25 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb630002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:25 compute-0 ceph-mon[74194]: pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:36:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:26 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:36:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:26 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:26.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:27.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:36:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:27.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:27 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:27 compute-0 ceph-mon[74194]: pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:36:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb630002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Sep 30 14:36:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:28 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb644004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:28.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:29.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:29 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:36:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:36:29 compute-0 ceph-mon[74194]: pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Sep 30 14:36:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:36:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:36:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:36:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:36:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:36:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:36:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:36:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:30 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Sep 30 14:36:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:30 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb630002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:30.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:36:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:31.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:36:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:31 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb660001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:31 compute-0 ceph-mon[74194]: pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Sep 30 14:36:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:32 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Sep 30 14:36:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:32 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:32.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:33.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:33 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb638000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:33.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:36:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:33.588Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:36:33 compute-0 ceph-mon[74194]: pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Sep 30 14:36:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:34 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb660001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:34 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb63c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:34.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:34] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:36:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:34] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:36:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:35.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:35 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb65c0040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:35 compute-0 ceph-mon[74194]: pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:36 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb6380016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:36:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[263319]: 30/09/2025 14:36:36 : epoch 68dbeaa4 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb660001320 fd 39 proxy ignored for local
Sep 30 14:36:36 compute-0 kernel: ganesha.nfsd[264463]: segfault at 50 ip 00007fb70de0432e sp 00007fb6ddffa210 error 4 in libntirpc.so.5.8[7fb70dde9000+2c000] likely on CPU 6 (core 0, socket 6)
Sep 30 14:36:36 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 14:36:36 compute-0 systemd[1]: Started Process Core Dump (PID 264471/UID 0).
Sep 30 14:36:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:36.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:37.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:36:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:37.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:37 compute-0 sudo[264474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:36:37 compute-0 sudo[264474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:37 compute-0 sudo[264474]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:37 compute-0 sudo[264499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:36:37 compute-0 sudo[264499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:37 compute-0 systemd-coredump[264472]: Process 263323 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007fb70de0432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Sep 30 14:36:37 compute-0 ceph-mon[74194]: pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:37 compute-0 systemd[1]: systemd-coredump@8-264471-0.service: Deactivated successfully.
Sep 30 14:36:37 compute-0 systemd[1]: systemd-coredump@8-264471-0.service: Consumed 1.192s CPU time.
Sep 30 14:36:37 compute-0 podman[264540]: 2025-09-30 14:36:37.854556694 +0000 UTC m=+0.030200792 container died d71623795855505ad9d7680a4e0fc119f5b35be89ab837ffbfb0a5b54eb96aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0db962e2db3d53f37b2527ed46033fcd31f22205d966d5b62d39b737ce038151-merged.mount: Deactivated successfully.
Sep 30 14:36:37 compute-0 podman[264540]: 2025-09-30 14:36:37.897569948 +0000 UTC m=+0.073214036 container remove d71623795855505ad9d7680a4e0fc119f5b35be89ab837ffbfb0a5b54eb96aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:36:37 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Main process exited, code=exited, status=139/n/a
Sep 30 14:36:38 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Failed with result 'exit-code'.
Sep 30 14:36:38 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.596s CPU time.
Sep 30 14:36:38 compute-0 sudo[264499]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:36:38.253 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:36:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:36:38.253 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:36:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:36:38.253 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:36:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:36:38 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:36:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:36:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:36:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:36:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:36:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:36:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:36:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:36:38 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:36:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:36:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:36:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:36:38 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:36:38 compute-0 sudo[264603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:36:38 compute-0 sudo[264603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:38 compute-0 sudo[264603]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:38 compute-0 sudo[264628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:36:38 compute-0 sudo[264628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:36:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:38.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:36:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:36:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:36:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:36:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:36:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:36:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:36:38 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:36:38 compute-0 podman[264692]: 2025-09-30 14:36:38.841887642 +0000 UTC m=+0.056975470 container create 750b095ab93170f97ef921456c367bdcdee8eab147af55c28d52479a20d7591b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:36:38 compute-0 systemd[1]: Started libpod-conmon-750b095ab93170f97ef921456c367bdcdee8eab147af55c28d52479a20d7591b.scope.
Sep 30 14:36:38 compute-0 podman[264692]: 2025-09-30 14:36:38.820623512 +0000 UTC m=+0.035711410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:36:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:36:38 compute-0 podman[264692]: 2025-09-30 14:36:38.937415366 +0000 UTC m=+0.152503194 container init 750b095ab93170f97ef921456c367bdcdee8eab147af55c28d52479a20d7591b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:36:38 compute-0 podman[264692]: 2025-09-30 14:36:38.94351206 +0000 UTC m=+0.158599888 container start 750b095ab93170f97ef921456c367bdcdee8eab147af55c28d52479a20d7591b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:36:38 compute-0 podman[264692]: 2025-09-30 14:36:38.946407578 +0000 UTC m=+0.161495406 container attach 750b095ab93170f97ef921456c367bdcdee8eab147af55c28d52479a20d7591b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 14:36:38 compute-0 naughty_wescoff[264709]: 167 167
Sep 30 14:36:38 compute-0 systemd[1]: libpod-750b095ab93170f97ef921456c367bdcdee8eab147af55c28d52479a20d7591b.scope: Deactivated successfully.
Sep 30 14:36:38 compute-0 conmon[264709]: conmon 750b095ab93170f97ef9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-750b095ab93170f97ef921456c367bdcdee8eab147af55c28d52479a20d7591b.scope/container/memory.events
Sep 30 14:36:38 compute-0 podman[264692]: 2025-09-30 14:36:38.950021505 +0000 UTC m=+0.165109333 container died 750b095ab93170f97ef921456c367bdcdee8eab147af55c28d52479a20d7591b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wescoff, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Sep 30 14:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5239b134157c82c0dcf5907cfa10879606db71fbc5cd45eff3a09b0dc9d43d96-merged.mount: Deactivated successfully.
Sep 30 14:36:38 compute-0 podman[264692]: 2025-09-30 14:36:38.983303988 +0000 UTC m=+0.198391816 container remove 750b095ab93170f97ef921456c367bdcdee8eab147af55c28d52479a20d7591b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wescoff, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:36:38 compute-0 systemd[1]: libpod-conmon-750b095ab93170f97ef921456c367bdcdee8eab147af55c28d52479a20d7591b.scope: Deactivated successfully.
Sep 30 14:36:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:36:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:39.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:36:39 compute-0 podman[264732]: 2025-09-30 14:36:39.162529508 +0000 UTC m=+0.045012899 container create 64996e56bda48653f84cfe1acfff8c5ef59377c4b781210face82ac676f65905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:36:39 compute-0 systemd[1]: Started libpod-conmon-64996e56bda48653f84cfe1acfff8c5ef59377c4b781210face82ac676f65905.scope.
Sep 30 14:36:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b5445783c88807b80f9a083084e6a98f2fb93e74abb6070d00a68c31d55380/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b5445783c88807b80f9a083084e6a98f2fb93e74abb6070d00a68c31d55380/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b5445783c88807b80f9a083084e6a98f2fb93e74abb6070d00a68c31d55380/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b5445783c88807b80f9a083084e6a98f2fb93e74abb6070d00a68c31d55380/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b5445783c88807b80f9a083084e6a98f2fb93e74abb6070d00a68c31d55380/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:39 compute-0 podman[264732]: 2025-09-30 14:36:39.140668871 +0000 UTC m=+0.023152282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:36:39 compute-0 podman[264732]: 2025-09-30 14:36:39.242788942 +0000 UTC m=+0.125272403 container init 64996e56bda48653f84cfe1acfff8c5ef59377c4b781210face82ac676f65905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:36:39 compute-0 podman[264732]: 2025-09-30 14:36:39.253048238 +0000 UTC m=+0.135531639 container start 64996e56bda48653f84cfe1acfff8c5ef59377c4b781210face82ac676f65905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:36:39 compute-0 podman[264732]: 2025-09-30 14:36:39.257806865 +0000 UTC m=+0.140290236 container attach 64996e56bda48653f84cfe1acfff8c5ef59377c4b781210face82ac676f65905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:36:39 compute-0 festive_noyce[264748]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:36:39 compute-0 festive_noyce[264748]: --> All data devices are unavailable
Sep 30 14:36:39 compute-0 systemd[1]: libpod-64996e56bda48653f84cfe1acfff8c5ef59377c4b781210face82ac676f65905.scope: Deactivated successfully.
Sep 30 14:36:39 compute-0 podman[264732]: 2025-09-30 14:36:39.617699674 +0000 UTC m=+0.500183045 container died 64996e56bda48653f84cfe1acfff8c5ef59377c4b781210face82ac676f65905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:36:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0b5445783c88807b80f9a083084e6a98f2fb93e74abb6070d00a68c31d55380-merged.mount: Deactivated successfully.
Sep 30 14:36:39 compute-0 podman[264732]: 2025-09-30 14:36:39.658825218 +0000 UTC m=+0.541308589 container remove 64996e56bda48653f84cfe1acfff8c5ef59377c4b781210face82ac676f65905 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_noyce, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:36:39 compute-0 systemd[1]: libpod-conmon-64996e56bda48653f84cfe1acfff8c5ef59377c4b781210face82ac676f65905.scope: Deactivated successfully.
Sep 30 14:36:39 compute-0 sudo[264628]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:39 compute-0 sudo[264774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:36:39 compute-0 sudo[264774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:39 compute-0 sudo[264774]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:39 compute-0 ceph-mon[74194]: pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:39 compute-0 sudo[264800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:36:39 compute-0 sudo[264800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:40 compute-0 podman[264866]: 2025-09-30 14:36:40.254065554 +0000 UTC m=+0.041759042 container create 99cb705dd74f7c19fabba777319434d42fd1dcc29c25a32fbbd1b1b284eac9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:36:40 compute-0 systemd[1]: Started libpod-conmon-99cb705dd74f7c19fabba777319434d42fd1dcc29c25a32fbbd1b1b284eac9bb.scope.
Sep 30 14:36:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:36:40 compute-0 podman[264866]: 2025-09-30 14:36:40.328878831 +0000 UTC m=+0.116572339 container init 99cb705dd74f7c19fabba777319434d42fd1dcc29c25a32fbbd1b1b284eac9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:36:40 compute-0 podman[264866]: 2025-09-30 14:36:40.23605099 +0000 UTC m=+0.023744498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:36:40 compute-0 podman[264866]: 2025-09-30 14:36:40.335393956 +0000 UTC m=+0.123087434 container start 99cb705dd74f7c19fabba777319434d42fd1dcc29c25a32fbbd1b1b284eac9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:36:40 compute-0 podman[264866]: 2025-09-30 14:36:40.338120139 +0000 UTC m=+0.125813647 container attach 99cb705dd74f7c19fabba777319434d42fd1dcc29c25a32fbbd1b1b284eac9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:36:40 compute-0 infallible_jennings[264882]: 167 167
Sep 30 14:36:40 compute-0 systemd[1]: libpod-99cb705dd74f7c19fabba777319434d42fd1dcc29c25a32fbbd1b1b284eac9bb.scope: Deactivated successfully.
Sep 30 14:36:40 compute-0 podman[264866]: 2025-09-30 14:36:40.339375333 +0000 UTC m=+0.127068831 container died 99cb705dd74f7c19fabba777319434d42fd1dcc29c25a32fbbd1b1b284eac9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 14:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4aa646449c84c28518cb03d3cd4cce23c16d2726c48bb82857297910fcc11b5-merged.mount: Deactivated successfully.
Sep 30 14:36:40 compute-0 podman[264866]: 2025-09-30 14:36:40.372065541 +0000 UTC m=+0.159759019 container remove 99cb705dd74f7c19fabba777319434d42fd1dcc29c25a32fbbd1b1b284eac9bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 14:36:40 compute-0 systemd[1]: libpod-conmon-99cb705dd74f7c19fabba777319434d42fd1dcc29c25a32fbbd1b1b284eac9bb.scope: Deactivated successfully.
Sep 30 14:36:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143640 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:36:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:40 compute-0 podman[264905]: 2025-09-30 14:36:40.594829819 +0000 UTC m=+0.047867165 container create f456cefa7a06cd2c1a43bc0ff0ceec79b848855eb06cea2cba37e0da989e5a1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:36:40 compute-0 systemd[1]: Started libpod-conmon-f456cefa7a06cd2c1a43bc0ff0ceec79b848855eb06cea2cba37e0da989e5a1d.scope.
Sep 30 14:36:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:36:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:40.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:36:40 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe02a49c78c6be518305c5f948896e5bc5eb618abe822449376ba85825ef75d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:40 compute-0 podman[264905]: 2025-09-30 14:36:40.575162971 +0000 UTC m=+0.028200297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe02a49c78c6be518305c5f948896e5bc5eb618abe822449376ba85825ef75d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe02a49c78c6be518305c5f948896e5bc5eb618abe822449376ba85825ef75d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe02a49c78c6be518305c5f948896e5bc5eb618abe822449376ba85825ef75d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:40 compute-0 podman[264905]: 2025-09-30 14:36:40.679581504 +0000 UTC m=+0.132618830 container init f456cefa7a06cd2c1a43bc0ff0ceec79b848855eb06cea2cba37e0da989e5a1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:36:40 compute-0 podman[264905]: 2025-09-30 14:36:40.693106057 +0000 UTC m=+0.146143393 container start f456cefa7a06cd2c1a43bc0ff0ceec79b848855eb06cea2cba37e0da989e5a1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhabha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:36:40 compute-0 podman[264905]: 2025-09-30 14:36:40.696777545 +0000 UTC m=+0.149814901 container attach f456cefa7a06cd2c1a43bc0ff0ceec79b848855eb06cea2cba37e0da989e5a1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]: {
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:     "0": [
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:         {
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "devices": [
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "/dev/loop3"
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             ],
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "lv_name": "ceph_lv0",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "lv_size": "21470642176",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "name": "ceph_lv0",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "tags": {
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.cluster_name": "ceph",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.crush_device_class": "",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.encrypted": "0",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.osd_id": "0",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.type": "block",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.vdo": "0",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:                 "ceph.with_tpm": "0"
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             },
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "type": "block",
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:             "vg_name": "ceph_vg0"
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:         }
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]:     ]
Sep 30 14:36:40 compute-0 gracious_bhabha[264922]: }
Sep 30 14:36:40 compute-0 systemd[1]: libpod-f456cefa7a06cd2c1a43bc0ff0ceec79b848855eb06cea2cba37e0da989e5a1d.scope: Deactivated successfully.
Sep 30 14:36:40 compute-0 podman[264905]: 2025-09-30 14:36:40.988723151 +0000 UTC m=+0.441760497 container died f456cefa7a06cd2c1a43bc0ff0ceec79b848855eb06cea2cba37e0da989e5a1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhabha, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 14:36:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-afe02a49c78c6be518305c5f948896e5bc5eb618abe822449376ba85825ef75d-merged.mount: Deactivated successfully.
Sep 30 14:36:41 compute-0 podman[264905]: 2025-09-30 14:36:41.045605308 +0000 UTC m=+0.498642654 container remove f456cefa7a06cd2c1a43bc0ff0ceec79b848855eb06cea2cba37e0da989e5a1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:36:41 compute-0 systemd[1]: libpod-conmon-f456cefa7a06cd2c1a43bc0ff0ceec79b848855eb06cea2cba37e0da989e5a1d.scope: Deactivated successfully.
Sep 30 14:36:41 compute-0 sudo[264800]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:41 compute-0 sudo[264945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:36:41 compute-0 sudo[264945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:41 compute-0 sudo[264945]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:41.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:41 compute-0 sudo[264970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:36:41 compute-0 sudo[264970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:41 compute-0 podman[265035]: 2025-09-30 14:36:41.606318835 +0000 UTC m=+0.046018966 container create 63ddeba1316223ab53b6273a9a50e370b27e3a9e9d61707024107c17af9e7e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:36:41 compute-0 systemd[1]: Started libpod-conmon-63ddeba1316223ab53b6273a9a50e370b27e3a9e9d61707024107c17af9e7e36.scope.
Sep 30 14:36:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:36:41 compute-0 podman[265035]: 2025-09-30 14:36:41.672747078 +0000 UTC m=+0.112447159 container init 63ddeba1316223ab53b6273a9a50e370b27e3a9e9d61707024107c17af9e7e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_varahamihira, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:36:41 compute-0 podman[265035]: 2025-09-30 14:36:41.680143927 +0000 UTC m=+0.119843968 container start 63ddeba1316223ab53b6273a9a50e370b27e3a9e9d61707024107c17af9e7e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:36:41 compute-0 podman[265035]: 2025-09-30 14:36:41.683537458 +0000 UTC m=+0.123237519 container attach 63ddeba1316223ab53b6273a9a50e370b27e3a9e9d61707024107c17af9e7e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:36:41 compute-0 jovial_varahamihira[265051]: 167 167
Sep 30 14:36:41 compute-0 podman[265035]: 2025-09-30 14:36:41.589691669 +0000 UTC m=+0.029391710 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:36:41 compute-0 systemd[1]: libpod-63ddeba1316223ab53b6273a9a50e370b27e3a9e9d61707024107c17af9e7e36.scope: Deactivated successfully.
Sep 30 14:36:41 compute-0 podman[265035]: 2025-09-30 14:36:41.68585232 +0000 UTC m=+0.125552361 container died 63ddeba1316223ab53b6273a9a50e370b27e3a9e9d61707024107c17af9e7e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 14:36:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b890a3b874aab52d49aabfdbe596648f62c3d35536fced71c95d23584d0d203-merged.mount: Deactivated successfully.
Sep 30 14:36:41 compute-0 podman[265035]: 2025-09-30 14:36:41.732237475 +0000 UTC m=+0.171937506 container remove 63ddeba1316223ab53b6273a9a50e370b27e3a9e9d61707024107c17af9e7e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:36:41 compute-0 systemd[1]: libpod-conmon-63ddeba1316223ab53b6273a9a50e370b27e3a9e9d61707024107c17af9e7e36.scope: Deactivated successfully.
Sep 30 14:36:41 compute-0 ceph-mon[74194]: pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:36:41 compute-0 podman[265077]: 2025-09-30 14:36:41.91424648 +0000 UTC m=+0.042382989 container create c961eeadf755336fb7c208731c382a3e41b33e1d55318a609eff0854738132b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:36:41 compute-0 systemd[1]: Started libpod-conmon-c961eeadf755336fb7c208731c382a3e41b33e1d55318a609eff0854738132b0.scope.
Sep 30 14:36:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:41 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:36:41 compute-0 podman[265077]: 2025-09-30 14:36:41.895445055 +0000 UTC m=+0.023581584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:36:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfbece5f165682e86d0de125edb3c2d011fc795f6e3f7dada82b7e108aeb850b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfbece5f165682e86d0de125edb3c2d011fc795f6e3f7dada82b7e108aeb850b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfbece5f165682e86d0de125edb3c2d011fc795f6e3f7dada82b7e108aeb850b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfbece5f165682e86d0de125edb3c2d011fc795f6e3f7dada82b7e108aeb850b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:42 compute-0 podman[265077]: 2025-09-30 14:36:42.008310584 +0000 UTC m=+0.136447113 container init c961eeadf755336fb7c208731c382a3e41b33e1d55318a609eff0854738132b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euler, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:36:42 compute-0 podman[265077]: 2025-09-30 14:36:42.017308026 +0000 UTC m=+0.145444535 container start c961eeadf755336fb7c208731c382a3e41b33e1d55318a609eff0854738132b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 14:36:42 compute-0 podman[265077]: 2025-09-30 14:36:42.020631255 +0000 UTC m=+0.148767794 container attach c961eeadf755336fb7c208731c382a3e41b33e1d55318a609eff0854738132b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Sep 30 14:36:42 compute-0 sudo[265134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:36:42 compute-0 sudo[265134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:42 compute-0 sudo[265134]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:36:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143642 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:36:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:42.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:42 compute-0 lvm[265192]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:36:42 compute-0 lvm[265192]: VG ceph_vg0 finished
Sep 30 14:36:42 compute-0 festive_euler[265093]: {}
Sep 30 14:36:42 compute-0 systemd[1]: libpod-c961eeadf755336fb7c208731c382a3e41b33e1d55318a609eff0854738132b0.scope: Deactivated successfully.
Sep 30 14:36:42 compute-0 systemd[1]: libpod-c961eeadf755336fb7c208731c382a3e41b33e1d55318a609eff0854738132b0.scope: Consumed 1.180s CPU time.
Sep 30 14:36:42 compute-0 podman[265077]: 2025-09-30 14:36:42.719565384 +0000 UTC m=+0.847701893 container died c961eeadf755336fb7c208731c382a3e41b33e1d55318a609eff0854738132b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euler, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfbece5f165682e86d0de125edb3c2d011fc795f6e3f7dada82b7e108aeb850b-merged.mount: Deactivated successfully.
Sep 30 14:36:42 compute-0 podman[265077]: 2025-09-30 14:36:42.761592252 +0000 UTC m=+0.889728761 container remove c961eeadf755336fb7c208731c382a3e41b33e1d55318a609eff0854738132b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:36:42 compute-0 systemd[1]: libpod-conmon-c961eeadf755336fb7c208731c382a3e41b33e1d55318a609eff0854738132b0.scope: Deactivated successfully.
Sep 30 14:36:42 compute-0 sudo[264970]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:36:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:36:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:36:42 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:36:42 compute-0 sudo[265207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:36:42 compute-0 sudo[265207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:36:42 compute-0 sudo[265207]: pam_unix(sudo:session): session closed for user root
Sep 30 14:36:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:43.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:43.589Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:36:43 compute-0 ceph-mon[74194]: pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:36:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:36:43 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:36:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:36:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:44.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:36:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:36:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:44] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:36:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:44] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:36:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:36:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:45.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:45 compute-0 ceph-mon[74194]: pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:36:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:36:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:46.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:47.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:36:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:47 compute-0 ceph-mon[74194]: pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:36:48 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Scheduled restart job, restart counter is at 9.
Sep 30 14:36:48 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:36:48 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.596s CPU time.
Sep 30 14:36:48 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:36:48 compute-0 podman[265238]: 2025-09-30 14:36:48.170133658 +0000 UTC m=+0.086632746 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Sep 30 14:36:48 compute-0 podman[265240]: 2025-09-30 14:36:48.176372995 +0000 UTC m=+0.084064217 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 14:36:48 compute-0 podman[265241]: 2025-09-30 14:36:48.182423978 +0000 UTC m=+0.080421990 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Sep 30 14:36:48 compute-0 podman[265239]: 2025-09-30 14:36:48.191092831 +0000 UTC m=+0.110712993 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Sep 30 14:36:48 compute-0 podman[265357]: 2025-09-30 14:36:48.322059384 +0000 UTC m=+0.041811343 container create 2494e710c4141598b3341817e6fb96cd6048dfc708ce657418d5686a2aecab76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8573f3e31a3bbbdedf48cb8e88a8011d90de131ec9b33330d47f6a5dc20d2a08/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8573f3e31a3bbbdedf48cb8e88a8011d90de131ec9b33330d47f6a5dc20d2a08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8573f3e31a3bbbdedf48cb8e88a8011d90de131ec9b33330d47f6a5dc20d2a08/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8573f3e31a3bbbdedf48cb8e88a8011d90de131ec9b33330d47f6a5dc20d2a08/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:36:48 compute-0 podman[265357]: 2025-09-30 14:36:48.374972025 +0000 UTC m=+0.094724004 container init 2494e710c4141598b3341817e6fb96cd6048dfc708ce657418d5686a2aecab76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:36:48 compute-0 podman[265357]: 2025-09-30 14:36:48.380788491 +0000 UTC m=+0.100540440 container start 2494e710c4141598b3341817e6fb96cd6048dfc708ce657418d5686a2aecab76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:36:48 compute-0 bash[265357]: 2494e710c4141598b3341817e6fb96cd6048dfc708ce657418d5686a2aecab76
Sep 30 14:36:48 compute-0 podman[265357]: 2025-09-30 14:36:48.304303438 +0000 UTC m=+0.024055417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:36:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:36:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:36:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:36:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:36:48 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:36:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:36:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:36:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:36:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:36:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:36:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:36:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:36:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:36:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:36:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:36:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:36:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:36:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:36:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:48.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:49 compute-0 ceph-mon[74194]: pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:36:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:36:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:50.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:51.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:52 compute-0 ceph-mon[74194]: pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:36:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 14:36:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:52.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:36:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:53.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:36:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:53.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:36:54 compute-0 ceph-mon[74194]: pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 14:36:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 14:36:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:36:54 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:36:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:36:54 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:36:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:54.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:54] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:36:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:36:54] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:36:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:55.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:55 compute-0 ceph-mon[74194]: pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Sep 30 14:36:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:36:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:56.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:36:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:57.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:36:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:57.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:36:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:36:57.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:36:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:57.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:57 compute-0 ceph-mon[74194]: pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:36:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:36:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:36:58.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:36:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:36:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:36:59.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:36:59
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.log', '.nfs', 'cephfs.cephfs.meta', '.mgr', 'vms', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:36:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:36:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:36:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:36:59 compute-0 ceph-mon[74194]: pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:36:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:37:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:37:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:00.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Sep 30 14:37:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Sep 30 14:37:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:37:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:37:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:37:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:37:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:37:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:37:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:37:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:37:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:37:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:37:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:01.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:01 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0938000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:01 compute-0 ceph-mon[74194]: pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:02 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143702 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:37:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:37:02 compute-0 sudo[265443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:37:02 compute-0 sudo[265443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:02 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:02 compute-0 sudo[265443]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:02.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:03.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:03 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:03.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143703 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:37:03 compute-0 ceph-mon[74194]: pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:37:03 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/745604479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:37:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:04 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:37:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143704 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:37:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:04 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:04.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:04] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:37:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:04] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:37:05 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2089101234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:37:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:37:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:05.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:37:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:05 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:06 compute-0 ceph-mon[74194]: pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:37:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:06 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:37:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:06 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:06.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:07.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:37:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:07.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:07.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:07 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.321 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.322 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.322 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.322 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.338 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.338 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.338 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.338 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.339 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.339 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.339 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.339 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.361 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.361 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.362 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.362 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.362 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:37:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:37:07 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221205682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.821 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.965 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.967 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4903MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.967 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:37:07 compute-0 nova_compute[261524]: 2025-09-30 14:37:07.967 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:37:08 compute-0 nova_compute[261524]: 2025-09-30 14:37:08.016 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:37:08 compute-0 nova_compute[261524]: 2025-09-30 14:37:08.016 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:37:08 compute-0 ceph-mon[74194]: pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Sep 30 14:37:08 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1028298652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:37:08 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/221205682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:37:08 compute-0 nova_compute[261524]: 2025-09-30 14:37:08.045 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:37:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:08 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:37:08 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2789354744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:37:08 compute-0 nova_compute[261524]: 2025-09-30 14:37:08.538 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:37:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:08 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:08 compute-0 nova_compute[261524]: 2025-09-30 14:37:08.544 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:37:08 compute-0 nova_compute[261524]: 2025-09-30 14:37:08.560 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:37:08 compute-0 nova_compute[261524]: 2025-09-30 14:37:08.562 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:37:08 compute-0 nova_compute[261524]: 2025-09-30 14:37:08.562 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:37:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:08.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:09 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2004604004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:37:09 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2789354744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:37:09 compute-0 nova_compute[261524]: 2025-09-30 14:37:09.176 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:37:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:09 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:09.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:10 compute-0 ceph-mon[74194]: pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:10 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:10 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:10.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1993709817' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:37:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1993709817' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:37:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:11 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:11.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:12 compute-0 ceph-mon[74194]: pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:12 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:37:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:12 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:12.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:12 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:37:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:13 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:13.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:13.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:14 compute-0 ceph-mon[74194]: pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:37:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:14 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:14 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:37:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:37:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:14.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:14] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:37:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:14] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:37:15 compute-0 ceph-mon[74194]: pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:37:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:15 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:15.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:15 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:37:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:15 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:37:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:15 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:37:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:16.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:17.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:17 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:17.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:17 compute-0 ceph-mon[74194]: pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:18 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:18 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:18.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:18 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:37:19 compute-0 podman[265531]: 2025-09-30 14:37:19.150477471 +0000 UTC m=+0.061685827 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Sep 30 14:37:19 compute-0 podman[265530]: 2025-09-30 14:37:19.161581997 +0000 UTC m=+0.078092754 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Sep 30 14:37:19 compute-0 podman[265528]: 2025-09-30 14:37:19.161630268 +0000 UTC m=+0.083711804 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:37:19 compute-0 podman[265529]: 2025-09-30 14:37:19.193232281 +0000 UTC m=+0.107442927 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Sep 30 14:37:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:19 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:37:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:19.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:37:19 compute-0 ceph-mon[74194]: pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:20 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:20 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:20.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:21 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:37:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:21.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:37:21 compute-0 ceph-mon[74194]: pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:22 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:37:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:22 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:22 compute-0 sudo[265614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:37:22 compute-0 sudo[265614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:22 compute-0 sudo[265614]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:22.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:23 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:23.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:23.593Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:23 compute-0 ceph-mon[74194]: pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:37:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:24 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:24 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:24.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:24] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:37:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:24] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:37:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:25 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:25.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143725 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:37:25 compute-0 ceph-mon[74194]: pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:26 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:26 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:26.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:27.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:27 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:27.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:27 compute-0 ceph-mon[74194]: pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:37:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:28 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:28 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:28 compute-0 PackageKit[194450]: daemon quit
Sep 30 14:37:28 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Sep 30 14:37:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:28.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:29 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:29.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:37:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:37:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:37:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:37:29 compute-0 ceph-mon[74194]: pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:37:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:37:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:37:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:37:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:37:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:30 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:30 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:30.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:31 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:31.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:31 compute-0 ceph-mon[74194]: pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:32 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:32 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:32.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:33 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:33.596Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:34 compute-0 ceph-mon[74194]: pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:37:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:34 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:37:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:34 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:34.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:34] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:37:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:34] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:37:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:35 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:35.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:36 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:37:36.018 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:37:36 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:37:36.021 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:37:36 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:37:36.023 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:37:36 compute-0 ceph-mon[74194]: pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:37:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:36 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:37:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:36 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928001930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:36.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:37.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:37 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:37.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:38 compute-0 ceph-mon[74194]: pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:37:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:37:38.254 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:37:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:37:38.254 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:37:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:37:38.255 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:37:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:38 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:38 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:38.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:39 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928001930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:39.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:40 compute-0 ceph-mon[74194]: pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:40 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:40 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:40.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:41 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:41.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:42 compute-0 ceph-mon[74194]: pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:42 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:37:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:42 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:42.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:42 compute-0 sudo[265662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:37:42 compute-0 sudo[265662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:42 compute-0 sudo[265662]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:43 compute-0 sudo[265687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:37:43 compute-0 sudo[265687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:43 compute-0 sudo[265687]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:43 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:43.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:43 compute-0 ceph-mon[74194]: pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:37:43 compute-0 sudo[265712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:37:43 compute-0 sudo[265712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:43.597Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:43 compute-0 sudo[265712]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:37:43 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:37:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:37:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:37:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:37:43 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:37:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:37:44 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:37:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:37:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:37:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:37:44 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:37:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:37:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:37:44 compute-0 sudo[265772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:37:44 compute-0 sudo[265772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:44 compute-0 sudo[265772]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:44 compute-0 sudo[265797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:37:44 compute-0 sudo[265797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:37:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:37:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:37:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:37:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:37:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:37:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:37:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:44 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09280029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:44 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:37:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:37:44 compute-0 podman[265862]: 2025-09-30 14:37:44.679202805 +0000 UTC m=+0.064720987 container create a2272f8848415d188a722745e8b044022e08ce580c73cf82bef6cbb04977e8a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_poincare, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 14:37:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:44.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:44 compute-0 systemd[1]: Started libpod-conmon-a2272f8848415d188a722745e8b044022e08ce580c73cf82bef6cbb04977e8a9.scope.
Sep 30 14:37:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:44] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:37:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:44] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:37:44 compute-0 podman[265862]: 2025-09-30 14:37:44.658158764 +0000 UTC m=+0.043676936 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:37:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:37:44 compute-0 podman[265862]: 2025-09-30 14:37:44.794165422 +0000 UTC m=+0.179683654 container init a2272f8848415d188a722745e8b044022e08ce580c73cf82bef6cbb04977e8a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_poincare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:37:44 compute-0 podman[265862]: 2025-09-30 14:37:44.805144485 +0000 UTC m=+0.190662667 container start a2272f8848415d188a722745e8b044022e08ce580c73cf82bef6cbb04977e8a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:37:44 compute-0 podman[265862]: 2025-09-30 14:37:44.809585033 +0000 UTC m=+0.195103215 container attach a2272f8848415d188a722745e8b044022e08ce580c73cf82bef6cbb04977e8a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:37:44 compute-0 brave_poincare[265878]: 167 167
Sep 30 14:37:44 compute-0 systemd[1]: libpod-a2272f8848415d188a722745e8b044022e08ce580c73cf82bef6cbb04977e8a9.scope: Deactivated successfully.
Sep 30 14:37:44 compute-0 podman[265862]: 2025-09-30 14:37:44.816473777 +0000 UTC m=+0.201991959 container died a2272f8848415d188a722745e8b044022e08ce580c73cf82bef6cbb04977e8a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_poincare, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a88375de485471c0dfa2fee1b711f485303a3a3d8dc3dd00fecd629c604bc3b0-merged.mount: Deactivated successfully.
Sep 30 14:37:44 compute-0 podman[265862]: 2025-09-30 14:37:44.881942643 +0000 UTC m=+0.267460795 container remove a2272f8848415d188a722745e8b044022e08ce580c73cf82bef6cbb04977e8a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_poincare, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:37:44 compute-0 systemd[1]: libpod-conmon-a2272f8848415d188a722745e8b044022e08ce580c73cf82bef6cbb04977e8a9.scope: Deactivated successfully.
Sep 30 14:37:45 compute-0 podman[265902]: 2025-09-30 14:37:45.074408197 +0000 UTC m=+0.058833060 container create 865db78e530a873cf9745cd554b6a8b8a1dd63bc46fd45e2d1782fa9ffd3f5e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:37:45 compute-0 systemd[1]: Started libpod-conmon-865db78e530a873cf9745cd554b6a8b8a1dd63bc46fd45e2d1782fa9ffd3f5e3.scope.
Sep 30 14:37:45 compute-0 podman[265902]: 2025-09-30 14:37:45.045042794 +0000 UTC m=+0.029467677 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:37:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16bf10f5f1e59c8a9060c7fcbdc410dbd91141c35c2cb5642fd1f5da0df2e5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16bf10f5f1e59c8a9060c7fcbdc410dbd91141c35c2cb5642fd1f5da0df2e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16bf10f5f1e59c8a9060c7fcbdc410dbd91141c35c2cb5642fd1f5da0df2e5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16bf10f5f1e59c8a9060c7fcbdc410dbd91141c35c2cb5642fd1f5da0df2e5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16bf10f5f1e59c8a9060c7fcbdc410dbd91141c35c2cb5642fd1f5da0df2e5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:45 compute-0 podman[265902]: 2025-09-30 14:37:45.185982784 +0000 UTC m=+0.170407637 container init 865db78e530a873cf9745cd554b6a8b8a1dd63bc46fd45e2d1782fa9ffd3f5e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goodall, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 14:37:45 compute-0 podman[265902]: 2025-09-30 14:37:45.200816969 +0000 UTC m=+0.185241802 container start 865db78e530a873cf9745cd554b6a8b8a1dd63bc46fd45e2d1782fa9ffd3f5e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goodall, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:37:45 compute-0 podman[265902]: 2025-09-30 14:37:45.204720043 +0000 UTC m=+0.189144936 container attach 865db78e530a873cf9745cd554b6a8b8a1dd63bc46fd45e2d1782fa9ffd3f5e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 14:37:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:45 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:45.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:45 compute-0 ceph-mon[74194]: pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:37:45 compute-0 frosty_goodall[265918]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:37:45 compute-0 frosty_goodall[265918]: --> All data devices are unavailable
Sep 30 14:37:45 compute-0 systemd[1]: libpod-865db78e530a873cf9745cd554b6a8b8a1dd63bc46fd45e2d1782fa9ffd3f5e3.scope: Deactivated successfully.
Sep 30 14:37:45 compute-0 podman[265902]: 2025-09-30 14:37:45.621932412 +0000 UTC m=+0.606357245 container died 865db78e530a873cf9745cd554b6a8b8a1dd63bc46fd45e2d1782fa9ffd3f5e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goodall, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 14:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c16bf10f5f1e59c8a9060c7fcbdc410dbd91141c35c2cb5642fd1f5da0df2e5e-merged.mount: Deactivated successfully.
Sep 30 14:37:45 compute-0 podman[265902]: 2025-09-30 14:37:45.679374334 +0000 UTC m=+0.663799197 container remove 865db78e530a873cf9745cd554b6a8b8a1dd63bc46fd45e2d1782fa9ffd3f5e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:37:45 compute-0 systemd[1]: libpod-conmon-865db78e530a873cf9745cd554b6a8b8a1dd63bc46fd45e2d1782fa9ffd3f5e3.scope: Deactivated successfully.
Sep 30 14:37:45 compute-0 sudo[265797]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:45 compute-0 sudo[265947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:37:45 compute-0 sudo[265947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:45 compute-0 sudo[265947]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:45 compute-0 sudo[265973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:37:45 compute-0 sudo[265973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:46 compute-0 podman[266039]: 2025-09-30 14:37:46.364794457 +0000 UTC m=+0.058761508 container create e9f5b6f7aa113469233199d37250cf8c1b413df3cfdbb1c373e06c22b9c886a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_khayyam, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:37:46 compute-0 systemd[1]: Started libpod-conmon-e9f5b6f7aa113469233199d37250cf8c1b413df3cfdbb1c373e06c22b9c886a0.scope.
Sep 30 14:37:46 compute-0 podman[266039]: 2025-09-30 14:37:46.331633063 +0000 UTC m=+0.025600194 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:37:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:37:46 compute-0 podman[266039]: 2025-09-30 14:37:46.451322675 +0000 UTC m=+0.145289796 container init e9f5b6f7aa113469233199d37250cf8c1b413df3cfdbb1c373e06c22b9c886a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:37:46 compute-0 podman[266039]: 2025-09-30 14:37:46.45973157 +0000 UTC m=+0.153698641 container start e9f5b6f7aa113469233199d37250cf8c1b413df3cfdbb1c373e06c22b9c886a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:37:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:46 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:46 compute-0 podman[266039]: 2025-09-30 14:37:46.462908884 +0000 UTC m=+0.156876045 container attach e9f5b6f7aa113469233199d37250cf8c1b413df3cfdbb1c373e06c22b9c886a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:37:46 compute-0 determined_khayyam[266055]: 167 167
Sep 30 14:37:46 compute-0 systemd[1]: libpod-e9f5b6f7aa113469233199d37250cf8c1b413df3cfdbb1c373e06c22b9c886a0.scope: Deactivated successfully.
Sep 30 14:37:46 compute-0 podman[266039]: 2025-09-30 14:37:46.468040371 +0000 UTC m=+0.162007472 container died e9f5b6f7aa113469233199d37250cf8c1b413df3cfdbb1c373e06c22b9c886a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_khayyam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6757e5dd34df119ad72118a7518e66ba23d389c89f6a7d47e3c6e9d7aecb20e-merged.mount: Deactivated successfully.
Sep 30 14:37:46 compute-0 podman[266039]: 2025-09-30 14:37:46.518277781 +0000 UTC m=+0.212244882 container remove e9f5b6f7aa113469233199d37250cf8c1b413df3cfdbb1c373e06c22b9c886a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_khayyam, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:37:46 compute-0 systemd[1]: libpod-conmon-e9f5b6f7aa113469233199d37250cf8c1b413df3cfdbb1c373e06c22b9c886a0.scope: Deactivated successfully.
Sep 30 14:37:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:37:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:46 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09280029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:37:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:46.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:37:46 compute-0 podman[266079]: 2025-09-30 14:37:46.724151983 +0000 UTC m=+0.046275275 container create a9cb562d7804f1f58fee1e2abcc27b66020a6284016d54a53adf8a304da1c325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:37:46 compute-0 systemd[1]: Started libpod-conmon-a9cb562d7804f1f58fee1e2abcc27b66020a6284016d54a53adf8a304da1c325.scope.
Sep 30 14:37:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:37:46 compute-0 podman[266079]: 2025-09-30 14:37:46.703704138 +0000 UTC m=+0.025827470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0566fe82d0d0520de25a8d9e4977d676fd363170f1c45b143c19b6dcd2ae0a38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0566fe82d0d0520de25a8d9e4977d676fd363170f1c45b143c19b6dcd2ae0a38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0566fe82d0d0520de25a8d9e4977d676fd363170f1c45b143c19b6dcd2ae0a38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0566fe82d0d0520de25a8d9e4977d676fd363170f1c45b143c19b6dcd2ae0a38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:46 compute-0 podman[266079]: 2025-09-30 14:37:46.813704562 +0000 UTC m=+0.135827894 container init a9cb562d7804f1f58fee1e2abcc27b66020a6284016d54a53adf8a304da1c325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_davinci, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:37:46 compute-0 podman[266079]: 2025-09-30 14:37:46.822573539 +0000 UTC m=+0.144696841 container start a9cb562d7804f1f58fee1e2abcc27b66020a6284016d54a53adf8a304da1c325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_davinci, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 14:37:46 compute-0 podman[266079]: 2025-09-30 14:37:46.826869273 +0000 UTC m=+0.148992605 container attach a9cb562d7804f1f58fee1e2abcc27b66020a6284016d54a53adf8a304da1c325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_davinci, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:37:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:47.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:37:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:47.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:37:47 compute-0 reverent_davinci[266094]: {
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:     "0": [
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:         {
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "devices": [
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "/dev/loop3"
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             ],
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "lv_name": "ceph_lv0",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "lv_size": "21470642176",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "name": "ceph_lv0",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "tags": {
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.cluster_name": "ceph",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.crush_device_class": "",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.encrypted": "0",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.osd_id": "0",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.type": "block",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.vdo": "0",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:                 "ceph.with_tpm": "0"
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             },
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "type": "block",
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:             "vg_name": "ceph_vg0"
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:         }
Sep 30 14:37:47 compute-0 reverent_davinci[266094]:     ]
Sep 30 14:37:47 compute-0 reverent_davinci[266094]: }
Sep 30 14:37:47 compute-0 systemd[1]: libpod-a9cb562d7804f1f58fee1e2abcc27b66020a6284016d54a53adf8a304da1c325.scope: Deactivated successfully.
Sep 30 14:37:47 compute-0 podman[266079]: 2025-09-30 14:37:47.157436361 +0000 UTC m=+0.479559663 container died a9cb562d7804f1f58fee1e2abcc27b66020a6284016d54a53adf8a304da1c325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_davinci, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0566fe82d0d0520de25a8d9e4977d676fd363170f1c45b143c19b6dcd2ae0a38-merged.mount: Deactivated successfully.
Sep 30 14:37:47 compute-0 podman[266079]: 2025-09-30 14:37:47.196928284 +0000 UTC m=+0.519051586 container remove a9cb562d7804f1f58fee1e2abcc27b66020a6284016d54a53adf8a304da1c325 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_davinci, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:37:47 compute-0 systemd[1]: libpod-conmon-a9cb562d7804f1f58fee1e2abcc27b66020a6284016d54a53adf8a304da1c325.scope: Deactivated successfully.
Sep 30 14:37:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:47 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:47 compute-0 sudo[265973]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:47.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:47 compute-0 sudo[266118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:37:47 compute-0 sudo[266118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:47 compute-0 sudo[266118]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:47 compute-0 sudo[266143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:37:47 compute-0 sudo[266143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:47 compute-0 ceph-mon[74194]: pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:37:47 compute-0 podman[266209]: 2025-09-30 14:37:47.834586794 +0000 UTC m=+0.043008628 container create 45b56a9060b2a82e5b0c3bc949b056d34b119f57a07ea4838fcbaa77e2b1ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_taussig, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:37:47 compute-0 systemd[1]: Started libpod-conmon-45b56a9060b2a82e5b0c3bc949b056d34b119f57a07ea4838fcbaa77e2b1ddb6.scope.
Sep 30 14:37:47 compute-0 podman[266209]: 2025-09-30 14:37:47.815407412 +0000 UTC m=+0.023829226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:37:47 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:37:47 compute-0 podman[266209]: 2025-09-30 14:37:47.939877172 +0000 UTC m=+0.148298986 container init 45b56a9060b2a82e5b0c3bc949b056d34b119f57a07ea4838fcbaa77e2b1ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_taussig, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:37:47 compute-0 podman[266209]: 2025-09-30 14:37:47.94728108 +0000 UTC m=+0.155702874 container start 45b56a9060b2a82e5b0c3bc949b056d34b119f57a07ea4838fcbaa77e2b1ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_taussig, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 14:37:47 compute-0 podman[266209]: 2025-09-30 14:37:47.950565097 +0000 UTC m=+0.158986891 container attach 45b56a9060b2a82e5b0c3bc949b056d34b119f57a07ea4838fcbaa77e2b1ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:37:47 compute-0 angry_taussig[266226]: 167 167
Sep 30 14:37:47 compute-0 systemd[1]: libpod-45b56a9060b2a82e5b0c3bc949b056d34b119f57a07ea4838fcbaa77e2b1ddb6.scope: Deactivated successfully.
Sep 30 14:37:47 compute-0 podman[266209]: 2025-09-30 14:37:47.954308427 +0000 UTC m=+0.162730241 container died 45b56a9060b2a82e5b0c3bc949b056d34b119f57a07ea4838fcbaa77e2b1ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce942e277e0640a8acc316dcf2276759afc674b5935909a96e985da40d8a9f4d-merged.mount: Deactivated successfully.
Sep 30 14:37:47 compute-0 podman[266209]: 2025-09-30 14:37:47.991256303 +0000 UTC m=+0.199678097 container remove 45b56a9060b2a82e5b0c3bc949b056d34b119f57a07ea4838fcbaa77e2b1ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:37:48 compute-0 systemd[1]: libpod-conmon-45b56a9060b2a82e5b0c3bc949b056d34b119f57a07ea4838fcbaa77e2b1ddb6.scope: Deactivated successfully.
Sep 30 14:37:48 compute-0 podman[266250]: 2025-09-30 14:37:48.159414038 +0000 UTC m=+0.047080206 container create 706751c9dd14945d2b11c7855207fdd60b0e2f07212a9f9251d99256a841d366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:37:48 compute-0 systemd[1]: Started libpod-conmon-706751c9dd14945d2b11c7855207fdd60b0e2f07212a9f9251d99256a841d366.scope.
Sep 30 14:37:48 compute-0 podman[266250]: 2025-09-30 14:37:48.138445079 +0000 UTC m=+0.026111297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:37:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156d04991eb2f22604d72e0a9ea810bca4613c4b36245e012c09786b9258aea4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156d04991eb2f22604d72e0a9ea810bca4613c4b36245e012c09786b9258aea4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156d04991eb2f22604d72e0a9ea810bca4613c4b36245e012c09786b9258aea4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156d04991eb2f22604d72e0a9ea810bca4613c4b36245e012c09786b9258aea4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:37:48 compute-0 podman[266250]: 2025-09-30 14:37:48.263251508 +0000 UTC m=+0.150917676 container init 706751c9dd14945d2b11c7855207fdd60b0e2f07212a9f9251d99256a841d366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:37:48 compute-0 podman[266250]: 2025-09-30 14:37:48.270731478 +0000 UTC m=+0.158397656 container start 706751c9dd14945d2b11c7855207fdd60b0e2f07212a9f9251d99256a841d366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:37:48 compute-0 podman[266250]: 2025-09-30 14:37:48.273721048 +0000 UTC m=+0.161387266 container attach 706751c9dd14945d2b11c7855207fdd60b0e2f07212a9f9251d99256a841d366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:37:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:48.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:49 compute-0 lvm[266342]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:37:49 compute-0 lvm[266342]: VG ceph_vg0 finished
Sep 30 14:37:49 compute-0 modest_bhaskara[266267]: {}
Sep 30 14:37:49 compute-0 systemd[1]: libpod-706751c9dd14945d2b11c7855207fdd60b0e2f07212a9f9251d99256a841d366.scope: Deactivated successfully.
Sep 30 14:37:49 compute-0 systemd[1]: libpod-706751c9dd14945d2b11c7855207fdd60b0e2f07212a9f9251d99256a841d366.scope: Consumed 1.351s CPU time.
Sep 30 14:37:49 compute-0 podman[266250]: 2025-09-30 14:37:49.085951504 +0000 UTC m=+0.973617662 container died 706751c9dd14945d2b11c7855207fdd60b0e2f07212a9f9251d99256a841d366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-156d04991eb2f22604d72e0a9ea810bca4613c4b36245e012c09786b9258aea4-merged.mount: Deactivated successfully.
Sep 30 14:37:49 compute-0 podman[266250]: 2025-09-30 14:37:49.137615941 +0000 UTC m=+1.025282109 container remove 706751c9dd14945d2b11c7855207fdd60b0e2f07212a9f9251d99256a841d366 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:37:49 compute-0 systemd[1]: libpod-conmon-706751c9dd14945d2b11c7855207fdd60b0e2f07212a9f9251d99256a841d366.scope: Deactivated successfully.
Sep 30 14:37:49 compute-0 sudo[266143]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:37:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:37:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:37:49 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:37:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:49 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09280029b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:49 compute-0 sudo[266357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:37:49 compute-0 sudo[266357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:37:49 compute-0 sudo[266357]: pam_unix(sudo:session): session closed for user root
Sep 30 14:37:49 compute-0 podman[266384]: 2025-09-30 14:37:49.372371473 +0000 UTC m=+0.055657476 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 14:37:49 compute-0 podman[266381]: 2025-09-30 14:37:49.372573008 +0000 UTC m=+0.059996781 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS)
Sep 30 14:37:49 compute-0 podman[266383]: 2025-09-30 14:37:49.401897391 +0000 UTC m=+0.081547917 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Sep 30 14:37:49 compute-0 podman[266382]: 2025-09-30 14:37:49.40189201 +0000 UTC m=+0.090484074 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Sep 30 14:37:49 compute-0 ceph-mon[74194]: pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:49 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:37:49 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:37:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:50 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:50 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:50.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:51 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:51.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:51 compute-0 ceph-mon[74194]: pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:52 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:37:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:52 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:52.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:53 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:53.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:53.600Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:53 compute-0 ceph-mon[74194]: pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:37:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:54 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:54 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:37:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:54.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:37:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:54] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:37:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:37:54] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Sep 30 14:37:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:55 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:55.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:55 compute-0 ceph-mon[74194]: pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:56 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:37:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:56 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:56.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:37:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:37:57.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:37:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:57 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:37:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:57.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:37:57 compute-0 ceph-mon[74194]: pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:37:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:58 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:58 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:37:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:37:58.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:37:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:37:59 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:37:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:37:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:37:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:37:59.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:37:59
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'images', '.rgw.root', '.nfs', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr']
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:37:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:37:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:37:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:37:59 compute-0 ceph-mon[74194]: pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:37:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:38:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:00.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:38:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:38:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:38:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:38:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:38:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:38:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:38:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:38:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:38:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:38:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:01 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09340022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:38:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:01.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:38:01 compute-0 ceph-mon[74194]: pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:02 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Check health
Sep 30 14:38:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:02 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:38:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:02 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:02.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:02 compute-0 sudo[266477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:38:02 compute-0 sudo[266477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:02 compute-0 sudo[266477]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:03 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928003ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:03.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:03.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:38:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:03.601Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:38:03 compute-0 ceph-mon[74194]: pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Sep 30 14:38:03 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3358068549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:38:03 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4055544409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:38:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:04 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09340022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:04 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:04.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:04] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Sep 30 14:38:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:04] "GET /metrics HTTP/1.1" 200 48424 "" "Prometheus/2.51.0"
Sep 30 14:38:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:05 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908002830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:05.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:05 compute-0 nova_compute[261524]: 2025-09-30 14:38:05.948 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:38:05 compute-0 nova_compute[261524]: 2025-09-30 14:38:05.948 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:38:06 compute-0 ceph-mon[74194]: pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:06 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:06 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09340022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143806 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:38:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:06.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:06 compute-0 nova_compute[261524]: 2025-09-30 14:38:06.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:38:06 compute-0 nova_compute[261524]: 2025-09-30 14:38:06.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:38:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:07.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:38:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:07 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:07.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143807 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:38:07 compute-0 nova_compute[261524]: 2025-09-30 14:38:07.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:38:07 compute-0 nova_compute[261524]: 2025-09-30 14:38:07.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:38:07 compute-0 nova_compute[261524]: 2025-09-30 14:38:07.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:38:07 compute-0 nova_compute[261524]: 2025-09-30 14:38:07.973 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:38:07 compute-0 nova_compute[261524]: 2025-09-30 14:38:07.974 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:38:07 compute-0 nova_compute[261524]: 2025-09-30 14:38:07.974 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:38:07 compute-0 nova_compute[261524]: 2025-09-30 14:38:07.975 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:38:07 compute-0 nova_compute[261524]: 2025-09-30 14:38:07.975 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:38:07 compute-0 nova_compute[261524]: 2025-09-30 14:38:07.975 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.019 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.019 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.020 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.020 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.020 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:38:08 compute-0 ceph-mon[74194]: pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:38:08 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3068820414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:38:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:08 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908002830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.486 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:38:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:38:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:08 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:08.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.786 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.787 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4903MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.787 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.788 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.877 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.878 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:38:08 compute-0 nova_compute[261524]: 2025-09-30 14:38:08.903 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:38:09 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3068820414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:38:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:09 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934003730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:09.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:38:09 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219605869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:38:09 compute-0 nova_compute[261524]: 2025-09-30 14:38:09.410 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:38:09 compute-0 nova_compute[261524]: 2025-09-30 14:38:09.417 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:38:09 compute-0 nova_compute[261524]: 2025-09-30 14:38:09.442 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:38:09 compute-0 nova_compute[261524]: 2025-09-30 14:38:09.445 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:38:09 compute-0 nova_compute[261524]: 2025-09-30 14:38:09.445 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:38:10 compute-0 ceph-mon[74194]: pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:38:10 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2200336952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:38:10 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2219605869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:38:10 compute-0 nova_compute[261524]: 2025-09-30 14:38:10.424 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:38:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:10 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:38:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:10 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908002830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000053s ======
Sep 30 14:38:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:10.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Sep 30 14:38:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3841096634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:38:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/136521688' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:38:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/136521688' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:38:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:11 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:11.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:12 compute-0 ceph-mon[74194]: pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.105636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243092105687, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2120, "num_deletes": 251, "total_data_size": 4144795, "memory_usage": 4232352, "flush_reason": "Manual Compaction"}
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243092126337, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4043810, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20159, "largest_seqno": 22278, "table_properties": {"data_size": 4034452, "index_size": 5852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19309, "raw_average_key_size": 20, "raw_value_size": 4015681, "raw_average_value_size": 4174, "num_data_blocks": 258, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759242873, "oldest_key_time": 1759242873, "file_creation_time": 1759243092, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 20764 microseconds, and 10208 cpu microseconds.
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.126394) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4043810 bytes OK
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.126419) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.131946) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.132004) EVENT_LOG_v1 {"time_micros": 1759243092131993, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.132031) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4136256, prev total WAL file size 4136256, number of live WAL files 2.
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.133082) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3949KB)], [44(12MB)]
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243092133223, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17523147, "oldest_snapshot_seqno": -1}
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5504 keys, 15337834 bytes, temperature: kUnknown
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243092244440, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 15337834, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15298389, "index_size": 24615, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 138549, "raw_average_key_size": 25, "raw_value_size": 15196127, "raw_average_value_size": 2760, "num_data_blocks": 1017, "num_entries": 5504, "num_filter_entries": 5504, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759243092, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.245066) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15337834 bytes
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.247571) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.5 rd, 137.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.9 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 6020, records dropped: 516 output_compression: NoCompression
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.247598) EVENT_LOG_v1 {"time_micros": 1759243092247585, "job": 22, "event": "compaction_finished", "compaction_time_micros": 111279, "compaction_time_cpu_micros": 49758, "output_level": 6, "num_output_files": 1, "total_output_size": 15337834, "num_input_records": 6020, "num_output_records": 5504, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243092248817, "job": 22, "event": "table_file_deletion", "file_number": 46}
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243092252054, "job": 22, "event": "table_file_deletion", "file_number": 44}
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.132940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.252287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.252297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.252301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.252304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:12 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:12.252307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:12 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:12 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:12.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:13 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908002830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:13.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:13.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:38:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:13.602Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:38:14 compute-0 ceph-mon[74194]: pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:14 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:38:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:14 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934003730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:38:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:38:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:14.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:14] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:38:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:14] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:38:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:38:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:15 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:15.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:16 compute-0 ceph-mon[74194]: pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:38:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:38:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908002830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:38:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908002830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:16.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:17.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:38:17 compute-0 ceph-mon[74194]: pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:38:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:17 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934003730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:17.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:18 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:38:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:18 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:18.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:19 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:19.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:19 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:38:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:19 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:38:19 compute-0 ceph-mon[74194]: pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:38:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:19 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:38:20 compute-0 podman[266565]: 2025-09-30 14:38:20.180829155 +0000 UTC m=+0.097345097 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 14:38:20 compute-0 podman[266567]: 2025-09-30 14:38:20.18960625 +0000 UTC m=+0.096914037 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Sep 30 14:38:20 compute-0 podman[266568]: 2025-09-30 14:38:20.199095053 +0000 UTC m=+0.105716441 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:38:20 compute-0 podman[266566]: 2025-09-30 14:38:20.223626967 +0000 UTC m=+0.134351065 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 14:38:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:20 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:38:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:20 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:20.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:21 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:21.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:21 compute-0 ceph-mon[74194]: pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:38:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:22 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:38:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:22 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0908003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:38:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:22.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:38:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:22 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:38:22 compute-0 sudo[266646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:38:22 compute-0 sudo[266646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:22 compute-0 sudo[266646]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:23 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:23.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:23.603Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:38:23 compute-0 ceph-mon[74194]: pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:38:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:24 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:38:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:24 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:24] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:38:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:24] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Sep 30 14:38:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:24.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:25 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:25.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:25 compute-0 ceph-mon[74194]: pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:38:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:26 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Sep 30 14:38:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:26 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:38:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:26.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:38:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:27.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:38:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:27 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:27.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:27 compute-0 ceph-mon[74194]: pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Sep 30 14:38:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:28 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:38:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:28 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143828 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:38:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:28.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:29 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:38:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:29.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:38:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:38:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:38:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143829 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:38:29 compute-0 ceph-mon[74194]: pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:38:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:38:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:38:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:38:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:38:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:38:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:38:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:38:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=cleanup t=2025-09-30T14:38:29.879525414Z level=info msg="Completed cleanup jobs" duration=37.435378ms
Sep 30 14:38:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=grafana.update.checker t=2025-09-30T14:38:29.960537585Z level=info msg="Update check succeeded" duration=47.110786ms
Sep 30 14:38:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=plugins.update.checker t=2025-09-30T14:38:29.964455329Z level=info msg="Update check succeeded" duration=55.359117ms
Sep 30 14:38:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:30 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:38:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:30 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:38:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:30.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:38:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:31 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:31.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:31 compute-0 ceph-mon[74194]: pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:38:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:32 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:38:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:32 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:32.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:33 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:33.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:33.604Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:38:33 compute-0 ceph-mon[74194]: pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:38:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:34 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:38:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:34 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:34] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:38:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:34] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:38:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:34.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:35 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:35.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:35 compute-0 ceph-mon[74194]: pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:38:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:36 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:38:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:36 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:36.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:37.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:38:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:37.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:38:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:37 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:37.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:37 compute-0 ceph-mon[74194]: pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Sep 30 14:38:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:38:38.255 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:38:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:38:38.256 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:38:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:38:38.256 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:38:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:38 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:38 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0910002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:38.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:39 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:39.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:39 compute-0 ceph-mon[74194]: pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:40 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:40 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:40.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:41 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:41.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:41 compute-0 ceph-mon[74194]: pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:42 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:42 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:42.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:43 compute-0 sudo[266694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:38:43 compute-0 sudo[266694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:43 compute-0 sudo[266694]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:43 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:43.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:43.606Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:38:43 compute-0 ceph-mon[74194]: pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:44 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:44 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:38:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:38:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:44] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:38:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:44] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:38:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:38:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:44.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:38:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:38:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:45 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:45.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:45 compute-0 ceph-mon[74194]: pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:46 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:38:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:46 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:46.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.034365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243127034403, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 522, "num_deletes": 250, "total_data_size": 641690, "memory_usage": 652592, "flush_reason": "Manual Compaction"}
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243127039812, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 435993, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22279, "largest_seqno": 22800, "table_properties": {"data_size": 433385, "index_size": 644, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6646, "raw_average_key_size": 19, "raw_value_size": 428144, "raw_average_value_size": 1255, "num_data_blocks": 29, "num_entries": 341, "num_filter_entries": 341, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759243093, "oldest_key_time": 1759243093, "file_creation_time": 1759243127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 5510 microseconds, and 1896 cpu microseconds.
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.039868) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 435993 bytes OK
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.039893) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.041660) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.041687) EVENT_LOG_v1 {"time_micros": 1759243127041680, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.041708) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 638782, prev total WAL file size 638782, number of live WAL files 2.
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.042215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(425KB)], [47(14MB)]
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243127042759, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 15773827, "oldest_snapshot_seqno": -1}
Sep 30 14:38:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:47.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:38:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:47.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:38:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:47.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5349 keys, 11870667 bytes, temperature: kUnknown
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243127126847, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 11870667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11836408, "index_size": 19813, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 135703, "raw_average_key_size": 25, "raw_value_size": 11740861, "raw_average_value_size": 2194, "num_data_blocks": 807, "num_entries": 5349, "num_filter_entries": 5349, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759243127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.127142) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 11870667 bytes
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.128419) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.4 rd, 141.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 14.6 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(63.4) write-amplify(27.2) OK, records in: 5845, records dropped: 496 output_compression: NoCompression
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.128439) EVENT_LOG_v1 {"time_micros": 1759243127128430, "job": 24, "event": "compaction_finished", "compaction_time_micros": 84166, "compaction_time_cpu_micros": 29192, "output_level": 6, "num_output_files": 1, "total_output_size": 11870667, "num_input_records": 5845, "num_output_records": 5349, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243127128643, "job": 24, "event": "table_file_deletion", "file_number": 49}
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243127132282, "job": 24, "event": "table_file_deletion", "file_number": 47}
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.042120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.132454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.132463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.132466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.132469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:38:47.132474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:38:47 compute-0 rsyslogd[1004]: imjournal from <np0005462840:ceph-mon>: begin to drop messages due to rate-limiting
Sep 30 14:38:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:47 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:47.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:48 compute-0 ceph-mon[74194]: pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:38:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:48.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:49 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:38:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:49.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:38:49 compute-0 sudo[266727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:38:49 compute-0 sudo[266727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:49 compute-0 sudo[266727]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:49 compute-0 sudo[266752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:38:49 compute-0 sudo[266752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:50 compute-0 ceph-mon[74194]: pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:50 compute-0 sudo[266752]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 14:38:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:38:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:38:50 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:38:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:38:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:38:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:38:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:38:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:38:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:38:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:38:50 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:38:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:38:50 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:38:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:38:50 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:38:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:50 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:50 compute-0 sudo[266809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:38:50 compute-0 sudo[266809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:50 compute-0 sudo[266809]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:50 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:50 compute-0 podman[266836]: 2025-09-30 14:38:50.635138913 +0000 UTC m=+0.051332180 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Sep 30 14:38:50 compute-0 sudo[266868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:38:50 compute-0 sudo[266868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:50 compute-0 podman[266833]: 2025-09-30 14:38:50.656445582 +0000 UTC m=+0.080784306 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 14:38:50 compute-0 podman[266835]: 2025-09-30 14:38:50.663572452 +0000 UTC m=+0.069506035 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:38:50 compute-0 podman[266834]: 2025-09-30 14:38:50.691939898 +0000 UTC m=+0.114264058 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Sep 30 14:38:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:50.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:51 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:38:51 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:38:51 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:38:51 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:38:51 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:38:51 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:38:51 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:38:51 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:38:51 compute-0 podman[266975]: 2025-09-30 14:38:51.11486362 +0000 UTC m=+0.053017405 container create e0dfc8d0816ecfdc5b5de38dc587c16834f695cfd4773d2e81426b9e9fcc4e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_goldberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:38:51 compute-0 systemd[1]: Started libpod-conmon-e0dfc8d0816ecfdc5b5de38dc587c16834f695cfd4773d2e81426b9e9fcc4e8c.scope.
Sep 30 14:38:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:38:51 compute-0 podman[266975]: 2025-09-30 14:38:51.09048977 +0000 UTC m=+0.028643585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:38:51 compute-0 podman[266975]: 2025-09-30 14:38:51.203252998 +0000 UTC m=+0.141406783 container init e0dfc8d0816ecfdc5b5de38dc587c16834f695cfd4773d2e81426b9e9fcc4e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:38:51 compute-0 podman[266975]: 2025-09-30 14:38:51.211179259 +0000 UTC m=+0.149333034 container start e0dfc8d0816ecfdc5b5de38dc587c16834f695cfd4773d2e81426b9e9fcc4e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_goldberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:38:51 compute-0 podman[266975]: 2025-09-30 14:38:51.215018141 +0000 UTC m=+0.153171976 container attach e0dfc8d0816ecfdc5b5de38dc587c16834f695cfd4773d2e81426b9e9fcc4e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_goldberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:38:51 compute-0 xenodochial_goldberg[266991]: 167 167
Sep 30 14:38:51 compute-0 systemd[1]: libpod-e0dfc8d0816ecfdc5b5de38dc587c16834f695cfd4773d2e81426b9e9fcc4e8c.scope: Deactivated successfully.
Sep 30 14:38:51 compute-0 conmon[266991]: conmon e0dfc8d0816ecfdc5b5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0dfc8d0816ecfdc5b5de38dc587c16834f695cfd4773d2e81426b9e9fcc4e8c.scope/container/memory.events
Sep 30 14:38:51 compute-0 podman[266975]: 2025-09-30 14:38:51.217651952 +0000 UTC m=+0.155805727 container died e0dfc8d0816ecfdc5b5de38dc587c16834f695cfd4773d2e81426b9e9fcc4e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 14:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b28d2ab97e85eb06c2c494caa03a03e877fd757e3a97dc1fe3cb2062d99cab2f-merged.mount: Deactivated successfully.
Sep 30 14:38:51 compute-0 podman[266975]: 2025-09-30 14:38:51.254005882 +0000 UTC m=+0.192159647 container remove e0dfc8d0816ecfdc5b5de38dc587c16834f695cfd4773d2e81426b9e9fcc4e8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:38:51 compute-0 systemd[1]: libpod-conmon-e0dfc8d0816ecfdc5b5de38dc587c16834f695cfd4773d2e81426b9e9fcc4e8c.scope: Deactivated successfully.
Sep 30 14:38:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:51 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904002720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:51.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:51 compute-0 podman[267016]: 2025-09-30 14:38:51.476670831 +0000 UTC m=+0.053727824 container create 45958c1322c98bb96aa81ba15ad7abc21775e895136f519955538ad3e7029010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_napier, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 14:38:51 compute-0 systemd[1]: Started libpod-conmon-45958c1322c98bb96aa81ba15ad7abc21775e895136f519955538ad3e7029010.scope.
Sep 30 14:38:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e591f73e8d58aabc7d08759f39b4d2bc95bd4b4f32a1602f42bb7eabd15cd71d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e591f73e8d58aabc7d08759f39b4d2bc95bd4b4f32a1602f42bb7eabd15cd71d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e591f73e8d58aabc7d08759f39b4d2bc95bd4b4f32a1602f42bb7eabd15cd71d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e591f73e8d58aabc7d08759f39b4d2bc95bd4b4f32a1602f42bb7eabd15cd71d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e591f73e8d58aabc7d08759f39b4d2bc95bd4b4f32a1602f42bb7eabd15cd71d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:51 compute-0 podman[267016]: 2025-09-30 14:38:51.454110279 +0000 UTC m=+0.031167322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:38:51 compute-0 podman[267016]: 2025-09-30 14:38:51.557786225 +0000 UTC m=+0.134843308 container init 45958c1322c98bb96aa81ba15ad7abc21775e895136f519955538ad3e7029010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:38:51 compute-0 podman[267016]: 2025-09-30 14:38:51.567284718 +0000 UTC m=+0.144341751 container start 45958c1322c98bb96aa81ba15ad7abc21775e895136f519955538ad3e7029010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_napier, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:38:51 compute-0 podman[267016]: 2025-09-30 14:38:51.5722149 +0000 UTC m=+0.149271903 container attach 45958c1322c98bb96aa81ba15ad7abc21775e895136f519955538ad3e7029010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:38:51 compute-0 stupefied_napier[267033]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:38:51 compute-0 stupefied_napier[267033]: --> All data devices are unavailable
Sep 30 14:38:51 compute-0 systemd[1]: libpod-45958c1322c98bb96aa81ba15ad7abc21775e895136f519955538ad3e7029010.scope: Deactivated successfully.
Sep 30 14:38:51 compute-0 podman[267016]: 2025-09-30 14:38:51.91112875 +0000 UTC m=+0.488185763 container died 45958c1322c98bb96aa81ba15ad7abc21775e895136f519955538ad3e7029010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e591f73e8d58aabc7d08759f39b4d2bc95bd4b4f32a1602f42bb7eabd15cd71d-merged.mount: Deactivated successfully.
Sep 30 14:38:51 compute-0 podman[267016]: 2025-09-30 14:38:51.958140494 +0000 UTC m=+0.535197487 container remove 45958c1322c98bb96aa81ba15ad7abc21775e895136f519955538ad3e7029010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_napier, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Sep 30 14:38:51 compute-0 systemd[1]: libpod-conmon-45958c1322c98bb96aa81ba15ad7abc21775e895136f519955538ad3e7029010.scope: Deactivated successfully.
Sep 30 14:38:52 compute-0 sudo[266868]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:52 compute-0 ceph-mon[74194]: pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:52 compute-0 sudo[267060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:38:52 compute-0 sudo[267060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:52 compute-0 sudo[267060]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:52 compute-0 sudo[267085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:38:52 compute-0 sudo[267085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:52 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:52 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:52 compute-0 podman[267151]: 2025-09-30 14:38:52.646234779 +0000 UTC m=+0.050866268 container create 6bf7b7af865495dc0cc284f2bb58393b79c0d1ac3133eeac30c4f39f824448e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 14:38:52 compute-0 systemd[1]: Started libpod-conmon-6bf7b7af865495dc0cc284f2bb58393b79c0d1ac3133eeac30c4f39f824448e5.scope.
Sep 30 14:38:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:38:52 compute-0 podman[267151]: 2025-09-30 14:38:52.619746512 +0000 UTC m=+0.024377991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:38:52 compute-0 podman[267151]: 2025-09-30 14:38:52.720917831 +0000 UTC m=+0.125549290 container init 6bf7b7af865495dc0cc284f2bb58393b79c0d1ac3133eeac30c4f39f824448e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:38:52 compute-0 podman[267151]: 2025-09-30 14:38:52.731702409 +0000 UTC m=+0.136333898 container start 6bf7b7af865495dc0cc284f2bb58393b79c0d1ac3133eeac30c4f39f824448e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:38:52 compute-0 podman[267151]: 2025-09-30 14:38:52.735555682 +0000 UTC m=+0.140187151 container attach 6bf7b7af865495dc0cc284f2bb58393b79c0d1ac3133eeac30c4f39f824448e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 14:38:52 compute-0 admiring_hodgkin[267168]: 167 167
Sep 30 14:38:52 compute-0 systemd[1]: libpod-6bf7b7af865495dc0cc284f2bb58393b79c0d1ac3133eeac30c4f39f824448e5.scope: Deactivated successfully.
Sep 30 14:38:52 compute-0 podman[267151]: 2025-09-30 14:38:52.737412031 +0000 UTC m=+0.142043470 container died 6bf7b7af865495dc0cc284f2bb58393b79c0d1ac3133eeac30c4f39f824448e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-146495503ed569736ee8399f083f8d99cc63a2c820ffefe71eb0219ce1d5b9ed-merged.mount: Deactivated successfully.
Sep 30 14:38:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:52 compute-0 podman[267151]: 2025-09-30 14:38:52.781489387 +0000 UTC m=+0.186120816 container remove 6bf7b7af865495dc0cc284f2bb58393b79c0d1ac3133eeac30c4f39f824448e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:38:52 compute-0 systemd[1]: libpod-conmon-6bf7b7af865495dc0cc284f2bb58393b79c0d1ac3133eeac30c4f39f824448e5.scope: Deactivated successfully.
Sep 30 14:38:52 compute-0 podman[267192]: 2025-09-30 14:38:52.997791957 +0000 UTC m=+0.075655959 container create ea561c52a4fcf1ee5a3efc4b56e4b97f65aab79f22522f539c0b11fb2eefc66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_feistel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:38:53 compute-0 systemd[1]: Started libpod-conmon-ea561c52a4fcf1ee5a3efc4b56e4b97f65aab79f22522f539c0b11fb2eefc66a.scope.
Sep 30 14:38:53 compute-0 podman[267192]: 2025-09-30 14:38:52.967860808 +0000 UTC m=+0.045724910 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:38:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2018a83aa1dc6b4cb6e72901e8f373d630a65f94ed6001943ee5e47faaf307a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2018a83aa1dc6b4cb6e72901e8f373d630a65f94ed6001943ee5e47faaf307a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2018a83aa1dc6b4cb6e72901e8f373d630a65f94ed6001943ee5e47faaf307a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2018a83aa1dc6b4cb6e72901e8f373d630a65f94ed6001943ee5e47faaf307a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:53 compute-0 podman[267192]: 2025-09-30 14:38:53.100152027 +0000 UTC m=+0.178016059 container init ea561c52a4fcf1ee5a3efc4b56e4b97f65aab79f22522f539c0b11fb2eefc66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:38:53 compute-0 podman[267192]: 2025-09-30 14:38:53.111596912 +0000 UTC m=+0.189460924 container start ea561c52a4fcf1ee5a3efc4b56e4b97f65aab79f22522f539c0b11fb2eefc66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_feistel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:38:53 compute-0 podman[267192]: 2025-09-30 14:38:53.114527741 +0000 UTC m=+0.192391743 container attach ea561c52a4fcf1ee5a3efc4b56e4b97f65aab79f22522f539c0b11fb2eefc66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 14:38:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:53 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:53.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:53 compute-0 jolly_feistel[267208]: {
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:     "0": [
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:         {
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "devices": [
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "/dev/loop3"
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             ],
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "lv_name": "ceph_lv0",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "lv_size": "21470642176",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "name": "ceph_lv0",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "tags": {
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.cluster_name": "ceph",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.crush_device_class": "",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.encrypted": "0",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.osd_id": "0",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.type": "block",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.vdo": "0",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:                 "ceph.with_tpm": "0"
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             },
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "type": "block",
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:             "vg_name": "ceph_vg0"
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:         }
Sep 30 14:38:53 compute-0 jolly_feistel[267208]:     ]
Sep 30 14:38:53 compute-0 jolly_feistel[267208]: }
Sep 30 14:38:53 compute-0 systemd[1]: libpod-ea561c52a4fcf1ee5a3efc4b56e4b97f65aab79f22522f539c0b11fb2eefc66a.scope: Deactivated successfully.
Sep 30 14:38:53 compute-0 podman[267192]: 2025-09-30 14:38:53.451330105 +0000 UTC m=+0.529194127 container died ea561c52a4fcf1ee5a3efc4b56e4b97f65aab79f22522f539c0b11fb2eefc66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2018a83aa1dc6b4cb6e72901e8f373d630a65f94ed6001943ee5e47faaf307a0-merged.mount: Deactivated successfully.
Sep 30 14:38:53 compute-0 podman[267192]: 2025-09-30 14:38:53.49875496 +0000 UTC m=+0.576618962 container remove ea561c52a4fcf1ee5a3efc4b56e4b97f65aab79f22522f539c0b11fb2eefc66a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:38:53 compute-0 systemd[1]: libpod-conmon-ea561c52a4fcf1ee5a3efc4b56e4b97f65aab79f22522f539c0b11fb2eefc66a.scope: Deactivated successfully.
Sep 30 14:38:53 compute-0 sudo[267085]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:53.606Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:38:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:53.606Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:38:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:53.607Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:38:53 compute-0 sudo[267230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:38:53 compute-0 sudo[267230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:53 compute-0 sudo[267230]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143853 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:38:53 compute-0 sudo[267255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:38:53 compute-0 sudo[267255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:54 compute-0 ceph-mon[74194]: pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:54 compute-0 podman[267321]: 2025-09-30 14:38:54.200088167 +0000 UTC m=+0.068548200 container create ae14b2abc5aeb2f86948ce48cd77ff1d528d99db72d3173d7946a90ada804c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:38:54 compute-0 systemd[1]: Started libpod-conmon-ae14b2abc5aeb2f86948ce48cd77ff1d528d99db72d3173d7946a90ada804c2b.scope.
Sep 30 14:38:54 compute-0 podman[267321]: 2025-09-30 14:38:54.173992021 +0000 UTC m=+0.042452134 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:38:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:38:54 compute-0 podman[267321]: 2025-09-30 14:38:54.286720798 +0000 UTC m=+0.155180901 container init ae14b2abc5aeb2f86948ce48cd77ff1d528d99db72d3173d7946a90ada804c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 14:38:54 compute-0 podman[267321]: 2025-09-30 14:38:54.293107368 +0000 UTC m=+0.161567401 container start ae14b2abc5aeb2f86948ce48cd77ff1d528d99db72d3173d7946a90ada804c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:38:54 compute-0 podman[267321]: 2025-09-30 14:38:54.296748275 +0000 UTC m=+0.165208318 container attach ae14b2abc5aeb2f86948ce48cd77ff1d528d99db72d3173d7946a90ada804c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:38:54 compute-0 upbeat_lamarr[267337]: 167 167
Sep 30 14:38:54 compute-0 systemd[1]: libpod-ae14b2abc5aeb2f86948ce48cd77ff1d528d99db72d3173d7946a90ada804c2b.scope: Deactivated successfully.
Sep 30 14:38:54 compute-0 conmon[267337]: conmon ae14b2abc5aeb2f86948 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae14b2abc5aeb2f86948ce48cd77ff1d528d99db72d3173d7946a90ada804c2b.scope/container/memory.events
Sep 30 14:38:54 compute-0 podman[267321]: 2025-09-30 14:38:54.300018662 +0000 UTC m=+0.168478705 container died ae14b2abc5aeb2f86948ce48cd77ff1d528d99db72d3173d7946a90ada804c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 14:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f5b8518bf8de109bc6eb047997e01a2dbcc6d5cb06439b451c6c88667753c2-merged.mount: Deactivated successfully.
Sep 30 14:38:54 compute-0 podman[267321]: 2025-09-30 14:38:54.346916203 +0000 UTC m=+0.215376276 container remove ae14b2abc5aeb2f86948ce48cd77ff1d528d99db72d3173d7946a90ada804c2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lamarr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:38:54 compute-0 systemd[1]: libpod-conmon-ae14b2abc5aeb2f86948ce48cd77ff1d528d99db72d3173d7946a90ada804c2b.scope: Deactivated successfully.
Sep 30 14:38:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:54 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:54 compute-0 podman[267364]: 2025-09-30 14:38:54.576265361 +0000 UTC m=+0.080145579 container create d5fc4b6b3ed0acf0ea3c2d5157672d4a5b164dbd8e38f0f0b45f9596b8910a59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:38:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:54 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:54 compute-0 podman[267364]: 2025-09-30 14:38:54.527503751 +0000 UTC m=+0.031383989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:38:54 compute-0 systemd[1]: Started libpod-conmon-d5fc4b6b3ed0acf0ea3c2d5157672d4a5b164dbd8e38f0f0b45f9596b8910a59.scope.
Sep 30 14:38:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147ebe86e6a20706cc72fb60a4e36dc327d0210c25494c6712b6260271f9e7e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147ebe86e6a20706cc72fb60a4e36dc327d0210c25494c6712b6260271f9e7e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147ebe86e6a20706cc72fb60a4e36dc327d0210c25494c6712b6260271f9e7e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/147ebe86e6a20706cc72fb60a4e36dc327d0210c25494c6712b6260271f9e7e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:38:54 compute-0 podman[267364]: 2025-09-30 14:38:54.701373539 +0000 UTC m=+0.205253857 container init d5fc4b6b3ed0acf0ea3c2d5157672d4a5b164dbd8e38f0f0b45f9596b8910a59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclean, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:38:54 compute-0 podman[267364]: 2025-09-30 14:38:54.712056544 +0000 UTC m=+0.215936792 container start d5fc4b6b3ed0acf0ea3c2d5157672d4a5b164dbd8e38f0f0b45f9596b8910a59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:38:54 compute-0 podman[267364]: 2025-09-30 14:38:54.71641816 +0000 UTC m=+0.220298378 container attach d5fc4b6b3ed0acf0ea3c2d5157672d4a5b164dbd8e38f0f0b45f9596b8910a59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclean, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 14:38:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:54] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:38:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:38:54] "GET /metrics HTTP/1.1" 200 48416 "" "Prometheus/2.51.0"
Sep 30 14:38:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:54.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:55 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:55 compute-0 lvm[267458]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:38:55 compute-0 lvm[267458]: VG ceph_vg0 finished
Sep 30 14:38:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:55.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:55 compute-0 loving_mclean[267381]: {}
Sep 30 14:38:55 compute-0 systemd[1]: libpod-d5fc4b6b3ed0acf0ea3c2d5157672d4a5b164dbd8e38f0f0b45f9596b8910a59.scope: Deactivated successfully.
Sep 30 14:38:55 compute-0 systemd[1]: libpod-d5fc4b6b3ed0acf0ea3c2d5157672d4a5b164dbd8e38f0f0b45f9596b8910a59.scope: Consumed 1.167s CPU time.
Sep 30 14:38:55 compute-0 podman[267364]: 2025-09-30 14:38:55.461016812 +0000 UTC m=+0.964897060 container died d5fc4b6b3ed0acf0ea3c2d5157672d4a5b164dbd8e38f0f0b45f9596b8910a59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-147ebe86e6a20706cc72fb60a4e36dc327d0210c25494c6712b6260271f9e7e6-merged.mount: Deactivated successfully.
Sep 30 14:38:55 compute-0 podman[267364]: 2025-09-30 14:38:55.512561607 +0000 UTC m=+1.016441825 container remove d5fc4b6b3ed0acf0ea3c2d5157672d4a5b164dbd8e38f0f0b45f9596b8910a59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_mclean, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:38:55 compute-0 systemd[1]: libpod-conmon-d5fc4b6b3ed0acf0ea3c2d5157672d4a5b164dbd8e38f0f0b45f9596b8910a59.scope: Deactivated successfully.
Sep 30 14:38:55 compute-0 sudo[267255]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:38:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:38:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:38:55 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:38:55 compute-0 sudo[267474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:38:55 compute-0 sudo[267474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:38:55 compute-0 sudo[267474]: pam_unix(sudo:session): session closed for user root
Sep 30 14:38:56 compute-0 ceph-mon[74194]: pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:38:56 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:38:56 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:38:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:56 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:38:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:56 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143856 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:38:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:38:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:56.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:38:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:38:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:38:57.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:38:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:57 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:57.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:58 compute-0 ceph-mon[74194]: pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:38:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:58 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:38:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:58 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:38:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:38:58.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:38:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:38:59 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:38:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:38:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:38:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:38:59.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:38:59
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'backups', 'images']
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:38:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:38:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:38:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:39:00 compute-0 ceph-mon[74194]: pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:39:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:39:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v697: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:39:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:39:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:00.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:39:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:39:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:39:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:39:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:39:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:39:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:39:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:39:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:39:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:39:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:39:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:01 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904003430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:01.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:02 compute-0 ceph-mon[74194]: pgmap v697: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Sep 30 14:39:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:02 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:39:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:02 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:39:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:02 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:02.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:03 compute-0 sudo[267506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:39:03 compute-0 sudo[267506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:39:03 compute-0 sudo[267506]: pam_unix(sudo:session): session closed for user root
Sep 30 14:39:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:03 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:03.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:03.607Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:39:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:03.607Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:39:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:03.607Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:39:03 compute-0 nova_compute[261524]: 2025-09-30 14:39:03.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:03 compute-0 nova_compute[261524]: 2025-09-30 14:39:03.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Sep 30 14:39:03 compute-0 nova_compute[261524]: 2025-09-30 14:39:03.967 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Sep 30 14:39:03 compute-0 nova_compute[261524]: 2025-09-30 14:39:03.968 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:03 compute-0 nova_compute[261524]: 2025-09-30 14:39:03.969 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Sep 30 14:39:03 compute-0 nova_compute[261524]: 2025-09-30 14:39:03.979 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:04 compute-0 ceph-mon[74194]: pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:39:04 compute-0 sshd-session[267531]: Invalid user guest from 194.0.234.19 port 54528
Sep 30 14:39:04 compute-0 sshd-session[267531]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 14:39:04 compute-0 sshd-session[267531]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=194.0.234.19
Sep 30 14:39:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:04 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v699: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:39:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:04 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140040f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:04] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:39:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:04] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Sep 30 14:39:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:04.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:05 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1629321479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:39:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:05 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:39:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:05.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:39:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:05 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:39:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:05 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:39:06 compute-0 ceph-mon[74194]: pgmap v699: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:39:06 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3026658726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:39:06 compute-0 sshd-session[267531]: Failed password for invalid user guest from 194.0.234.19 port 54528 ssh2
Sep 30 14:39:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:06 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v700: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:06 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:06.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:07.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:07 compute-0 ceph-mon[74194]: pgmap v700: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:07 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:07.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:07 compute-0 sshd-session[267531]: Connection closed by invalid user guest 194.0.234.19 port 54528 [preauth]
Sep 30 14:39:07 compute-0 nova_compute[261524]: 2025-09-30 14:39:07.987 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:07 compute-0 nova_compute[261524]: 2025-09-30 14:39:07.988 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:07 compute-0 nova_compute[261524]: 2025-09-30 14:39:07.988 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:07 compute-0 nova_compute[261524]: 2025-09-30 14:39:07.988 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:08 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:08 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:08 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:39:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:08.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:08 compute-0 nova_compute[261524]: 2025-09-30 14:39:08.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:08 compute-0 nova_compute[261524]: 2025-09-30 14:39:08.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:08 compute-0 nova_compute[261524]: 2025-09-30 14:39:08.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:08 compute-0 nova_compute[261524]: 2025-09-30 14:39:08.979 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:39:08 compute-0 nova_compute[261524]: 2025-09-30 14:39:08.979 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:39:08 compute-0 nova_compute[261524]: 2025-09-30 14:39:08.979 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:39:08 compute-0 nova_compute[261524]: 2025-09-30 14:39:08.980 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:39:08 compute-0 nova_compute[261524]: 2025-09-30 14:39:08.980 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:39:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:09 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:09.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:39:09 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605549365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.412 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.646 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.648 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4883MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.648 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.648 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:39:09 compute-0 ceph-mon[74194]: pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:09 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3605549365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.746 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.747 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.786 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing inventories for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.832 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating ProviderTree inventory for provider 06783cfc-6d32-454d-9501-ebd8adea3735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.832 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.850 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing aggregate associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.870 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing trait associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Sep 30 14:39:09 compute-0 nova_compute[261524]: 2025-09-30 14:39:09.886 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:39:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:39:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4157708474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:39:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:39:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1812367225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:39:10 compute-0 nova_compute[261524]: 2025-09-30 14:39:10.336 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:39:10 compute-0 nova_compute[261524]: 2025-09-30 14:39:10.342 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:39:10 compute-0 nova_compute[261524]: 2025-09-30 14:39:10.357 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:39:10 compute-0 nova_compute[261524]: 2025-09-30 14:39:10.359 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:39:10 compute-0 nova_compute[261524]: 2025-09-30 14:39:10.359 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:39:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:10 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v702: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:10 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:10 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4157708474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:39:10 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1812367225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:39:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:10.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:11 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:11 compute-0 nova_compute[261524]: 2025-09-30 14:39:11.359 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:11 compute-0 nova_compute[261524]: 2025-09-30 14:39:11.360 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:39:11 compute-0 nova_compute[261524]: 2025-09-30 14:39:11.360 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:39:11 compute-0 nova_compute[261524]: 2025-09-30 14:39:11.381 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:39:11 compute-0 nova_compute[261524]: 2025-09-30 14:39:11.382 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:39:11 compute-0 nova_compute[261524]: 2025-09-30 14:39:11.382 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:39:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:11.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:11 compute-0 ceph-mon[74194]: pgmap v702: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/751279081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:39:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/751279081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:39:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2369819811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:39:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:11 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:39:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:12 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v703: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:39:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:12 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:12.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:13 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c002310 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:13.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:13.609Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:39:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:13.609Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:13 compute-0 ceph-mon[74194]: pgmap v703: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:39:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143913 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:39:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=404 latency=0.007000187s ======
Sep 30 14:39:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:13.835 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.007000187s
Sep 30 14:39:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.003000080s ======
Sep 30 14:39:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - - [30/Sep/2025:14:39:13.859 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.003000080s
Sep 30 14:39:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:14 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v704: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:14 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:39:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:39:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:39:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:14 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:39:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:14 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:39:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:14] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:39:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:14] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:39:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:15 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:15 compute-0 ceph-mon[74194]: pgmap v704: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v705: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Sep 30 14:39:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:16.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:17.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:17 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c002310 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:39:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:17.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:39:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Sep 30 14:39:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:17 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:39:17 compute-0 ceph-mon[74194]: pgmap v705: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Sep 30 14:39:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Sep 30 14:39:17 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Sep 30 14:39:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:18 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v707: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:39:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:18 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Sep 30 14:39:18 compute-0 ceph-mon[74194]: osdmap e142: 3 total, 3 up, 3 in
Sep 30 14:39:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Sep 30 14:39:18 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Sep 30 14:39:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:18.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:19 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:19.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Sep 30 14:39:19 compute-0 ceph-mon[74194]: pgmap v707: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:39:19 compute-0 ceph-mon[74194]: osdmap e143: 3 total, 3 up, 3 in
Sep 30 14:39:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Sep 30 14:39:19 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Sep 30 14:39:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:20 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v710: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Sep 30 14:39:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:20 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143920 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:39:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Sep 30 14:39:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Sep 30 14:39:20 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Sep 30 14:39:20 compute-0 ceph-mon[74194]: osdmap e144: 3 total, 3 up, 3 in
Sep 30 14:39:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:20.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:21 compute-0 podman[267600]: 2025-09-30 14:39:21.15325203 +0000 UTC m=+0.065419906 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 14:39:21 compute-0 podman[267598]: 2025-09-30 14:39:21.181543664 +0000 UTC m=+0.101032596 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Sep 30 14:39:21 compute-0 podman[267601]: 2025-09-30 14:39:21.181440912 +0000 UTC m=+0.087202888 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 14:39:21 compute-0 podman[267599]: 2025-09-30 14:39:21.18813933 +0000 UTC m=+0.105113655 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Sep 30 14:39:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:21 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:21.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:21 compute-0 ceph-mon[74194]: pgmap v710: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Sep 30 14:39:21 compute-0 ceph-mon[74194]: osdmap e145: 3 total, 3 up, 3 in
Sep 30 14:39:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:22 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v712: 337 pgs: 337 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 8.5 MiB/s wr, 79 op/s
Sep 30 14:39:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:22 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:22.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:23 compute-0 sudo[267679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:39:23 compute-0 sudo[267679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:39:23 compute-0 sudo[267679]: pam_unix(sudo:session): session closed for user root
Sep 30 14:39:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:23 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:23.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:23.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143923 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:39:23 compute-0 ceph-mon[74194]: pgmap v712: 337 pgs: 337 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 8.5 MiB/s wr, 79 op/s
Sep 30 14:39:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:24 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v713: 337 pgs: 337 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Sep 30 14:39:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:24 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:24] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:39:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:24] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Sep 30 14:39:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:39:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:24.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:39:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:25 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09140041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:39:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:25.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:39:25 compute-0 ceph-mon[74194]: pgmap v713: 337 pgs: 337 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Sep 30 14:39:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:26 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v714: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 MiB/s wr, 49 op/s
Sep 30 14:39:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:26 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:39:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:26.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:39:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Sep 30 14:39:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Sep 30 14:39:27 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Sep 30 14:39:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:27.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:27 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:27.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:28 compute-0 ceph-mon[74194]: pgmap v714: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 MiB/s wr, 49 op/s
Sep 30 14:39:28 compute-0 ceph-mon[74194]: osdmap e146: 3 total, 3 up, 3 in
Sep 30 14:39:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:28 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v716: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Sep 30 14:39:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:28 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:28.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:29 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:39:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:39:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:39:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:39:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:39:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:39:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:39:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:39:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:39:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:39:30 compute-0 ceph-mon[74194]: pgmap v716: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Sep 30 14:39:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:39:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:30 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v717: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.2 MiB/s wr, 39 op/s
Sep 30 14:39:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:30 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:30.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:31 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:31.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:32 compute-0 ceph-mon[74194]: pgmap v717: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.2 MiB/s wr, 39 op/s
Sep 30 14:39:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:32 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v718: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 102 B/s wr, 0 op/s
Sep 30 14:39:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:32 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:39:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:32 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:32.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:33 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:33.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:33.611Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:39:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:33.611Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:34 compute-0 ceph-mon[74194]: pgmap v718: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 102 B/s wr, 0 op/s
Sep 30 14:39:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:34 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v719: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 102 B/s wr, 0 op/s
Sep 30 14:39:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:34 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:34] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Sep 30 14:39:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:34] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
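Note on the access lines above: Prometheus 2.51.0 is scraping the ceph-mgr prometheus module (HTTP 200, about 48 KiB of metrics). The same scrape can be reproduced by hand; a sketch assuming the module listens on its default port 9283 on this node (192.168.122.100, the mgr address seen elsewhere in this log), which this excerpt does not state:

    import requests

    # Target is the local mgr; port 9283 is the prometheus module default (assumption).
    resp = requests.get("http://192.168.122.100:9283/metrics", timeout=5)
    resp.raise_for_status()
    # Print the first few lines of the Prometheus exposition output.
    print("\n".join(resp.text.splitlines()[:5]))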
Sep 30 14:39:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:34.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:35 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:35.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:35 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:39:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:35 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:39:36 compute-0 ceph-mon[74194]: pgmap v719: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 102 B/s wr, 0 op/s
Sep 30 14:39:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:36 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v720: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Sep 30 14:39:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:36 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:36.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:37.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:37 compute-0 ceph-mon[74194]: pgmap v720: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Sep 30 14:39:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:37 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:37.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:39:38.257 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:39:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:39:38.259 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:39:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:39:38.259 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
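Note on the three DEBUG lines above: this is oslo.concurrency's usual acquire/acquired/released trace around ProcessMonitor._check_child_processes. A minimal sketch of the same locking pattern (assumes the oslo.concurrency package; the function below is illustrative, not the agent's actual code):

    from oslo_concurrency import lockutils

    # Serializes callers on an in-process lock named "_check_child_processes";
    # with oslo logging at DEBUG this produces the same three-line trace as above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # placeholder for the real child-process check

    check_child_processes()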
Sep 30 14:39:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:38 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v721: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 975 B/s wr, 3 op/s
Sep 30 14:39:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:38 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0914004280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:38 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:39:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:38.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:39 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:39.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:39 compute-0 ceph-mon[74194]: pgmap v721: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 975 B/s wr, 3 op/s
Sep 30 14:39:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:40 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v722: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:39:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:40 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:40.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:41 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:41 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:41 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:39:41.528 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:39:41 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:39:41.530 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:39:41 compute-0 ceph-mon[74194]: pgmap v722: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Sep 30 14:39:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:42 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v723: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:39:42 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:42 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:42.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:43 compute-0 sudo[267727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:39:43 compute-0 sudo[267727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:39:43 compute-0 sudo[267727]: pam_unix(sudo:session): session closed for user root
Sep 30 14:39:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:43 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:43.612Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:43 compute-0 ceph-mon[74194]: pgmap v723: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Sep 30 14:39:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/143943 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
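Note on the haproxy warning above: backend/nfs.cephfs.0 is reported UP after a Layer4 check, i.e. a plain TCP connect to the ganesha backend. A sketch of the same probe (both the address and port 2049 are assumptions; this excerpt does not show where the backend actually listens):

    import socket

    # Layer4 check = complete a TCP handshake, nothing more.
    with socket.create_connection(("192.168.122.100", 2049), timeout=2) as s:
        print("layer4 check passed:", s.getpeername())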
Sep 30 14:39:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:44 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100023e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v724: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:39:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:39:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:44 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
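Note on the audit lines above: the mgr periodically dispatches "osd blocklist ls" in JSON format to the mon. The same query can be run manually; a sketch (assumes the ceph CLI and an admin keyring are available on the host):

    import json
    import subprocess

    # Same command the mgr dispatches, issued through the ceph CLI.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    # Typically an empty list on a cluster with no blocklisted clients.
    print(json.loads(out))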
Sep 30 14:39:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:44] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Sep 30 14:39:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:44] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Sep 30 14:39:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:44.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:45 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:45 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:45.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:45 compute-0 ceph-mon[74194]: pgmap v724: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:46 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v725: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:46 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100023e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:46.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:47.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:47 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:39:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:47.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:39:47 compute-0 ceph-mon[74194]: pgmap v725: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Sep 30 14:39:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v726: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:39:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:48 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:49 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100023e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:49.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:49 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:39:49.532 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:39:49 compute-0 ceph-mon[74194]: pgmap v726: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:39:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:50 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v727: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:39:50 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:50 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:50.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:51 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:51 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:39:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:51.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:39:51 compute-0 ceph-mon[74194]: pgmap v727: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:39:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.115905) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243192115953, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 871, "num_deletes": 255, "total_data_size": 1326471, "memory_usage": 1348160, "flush_reason": "Manual Compaction"}
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243192130073, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1307777, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22801, "largest_seqno": 23671, "table_properties": {"data_size": 1303439, "index_size": 1990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9442, "raw_average_key_size": 18, "raw_value_size": 1294514, "raw_average_value_size": 2578, "num_data_blocks": 88, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759243128, "oldest_key_time": 1759243128, "file_creation_time": 1759243192, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 14235 microseconds, and 8103 cpu microseconds.
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.130136) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1307777 bytes OK
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.130164) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.132234) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.132267) EVENT_LOG_v1 {"time_micros": 1759243192132256, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.132296) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1322255, prev total WAL file size 1322255, number of live WAL files 2.
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.132976) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
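Note on the manual compaction above: it covers the key range '6C6F676D00323531' .. '6C6F676D00353032'. Decoding the hex shows the monitor's "logm" prefix, entries 251 through 502, which is consistent with the store compacting the range of cluster-log keys it has just trimmed. A quick decode (no assumptions beyond the two hex strings in the log line):

    # Hex key bounds copied from the compaction line above.
    for key in ("6C6F676D00323531", "6C6F676D00353032"):
        raw = bytes.fromhex(key)
        prefix, _, suffix = raw.partition(b"\x00")
        print(prefix.decode(), suffix.decode())  # -> "logm 251" then "logm 502"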
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1277KB)], [50(11MB)]
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243192133056, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13178444, "oldest_snapshot_seqno": -1}
Sep 30 14:39:52 compute-0 podman[267770]: 2025-09-30 14:39:52.161549727 +0000 UTC m=+0.063081787 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Sep 30 14:39:52 compute-0 podman[267762]: 2025-09-30 14:39:52.161578998 +0000 UTC m=+0.077655387 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=iscsid)
Sep 30 14:39:52 compute-0 podman[267764]: 2025-09-30 14:39:52.17550051 +0000 UTC m=+0.076859976 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS)
Sep 30 14:39:52 compute-0 podman[267763]: 2025-09-30 14:39:52.201140546 +0000 UTC m=+0.110144596 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5323 keys, 13009910 bytes, temperature: kUnknown
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243192219788, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13009910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12974353, "index_size": 21175, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 136352, "raw_average_key_size": 25, "raw_value_size": 12877765, "raw_average_value_size": 2419, "num_data_blocks": 861, "num_entries": 5323, "num_filter_entries": 5323, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759243192, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.220031) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13009910 bytes
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.221062) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.8 rd, 149.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(20.0) write-amplify(9.9) OK, records in: 5851, records dropped: 528 output_compression: NoCompression
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.221078) EVENT_LOG_v1 {"time_micros": 1759243192221071, "job": 26, "event": "compaction_finished", "compaction_time_micros": 86803, "compaction_time_cpu_micros": 32270, "output_level": 6, "num_output_files": 1, "total_output_size": 13009910, "num_input_records": 5851, "num_output_records": 5323, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243192221457, "job": 26, "event": "table_file_deletion", "file_number": 52}
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243192223277, "job": 26, "event": "table_file_deletion", "file_number": 50}
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.132880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.223307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.223311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.223313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.223315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:39:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:39:52.223317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:39:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:52 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v728: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:39:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:52 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:52.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:53 compute-0 ceph-mon[74194]: pgmap v728: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Sep 30 14:39:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:53 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:53.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:53.614Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:54 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:39:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:54 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:54] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Sep 30 14:39:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:39:54] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Sep 30 14:39:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000054s ======
Sep 30 14:39:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:54.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Sep 30 14:39:55 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:55 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:55.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:55 compute-0 ceph-mon[74194]: pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:39:56 compute-0 sudo[267844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:39:56 compute-0 sudo[267844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:39:56 compute-0 sudo[267844]: pam_unix(sudo:session): session closed for user root
Sep 30 14:39:56 compute-0 sudo[267869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:39:56 compute-0 sudo[267869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:39:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:39:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:56 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:56 compute-0 sudo[267869]: pam_unix(sudo:session): session closed for user root
Sep 30 14:39:56 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:56 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000053s ======
Sep 30 14:39:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:56.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Sep 30 14:39:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:39:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:39:57.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:39:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:57 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:57.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:57 compute-0 ceph-mon[74194]: pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Sep 30 14:39:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v731: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:39:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:58 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:58 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:39:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:39:58 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:39:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:39:58.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:58 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:39:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:39:59 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:39:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:39:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:39:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:39:59.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:39:59
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', 'volumes', 'images', 'backups', 'default.rgw.log', '.rgw.root', 'vms', '.mgr']
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:39:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:39:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:39:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:39:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 14:39:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:39:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:39:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:39:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:39:59 compute-0 ceph-mon[74194]: pgmap v731: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:39:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:39:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:39:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:39:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:39:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:39:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:39:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
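
In the pg_autoscaler pass above, each pool's raw pg target works out to capacity_fraction x bias x 300, and 300 is consistent with the default mon_target_pg_per_osd of 100 times the three OSDs in this cluster (that factor is inferred from the logged numbers, not read out of the autoscaler source). A sketch that reproduces the logged targets under that assumption:

    # Hedged sketch: reproduce the pg_autoscaler targets logged above.
    # Assumption: the common factor 300 = mon_target_pg_per_osd (default 100) * 3 OSDs.
    POOLS = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    TARGET_PGS = 100 * 3

    for pool, (capacity_fraction, bias) in POOLS.items():
        print(pool, capacity_fraction * bias * TARGET_PGS)
    # expected (matching the log lines above):
    #   .mgr               ~ 0.0021557249951162337   (quantized to 1)
    #   images             ~ 0.19975749047665559     (quantized to 32)
    #   cephfs.cephfs.meta ~ 0.0006104707950771635   (quantized to 16)

Every pool keeps its current pg_num because the autoscaler only changes pg_num when the target and the current value differ by more than its threshold (3x by default), which none of these nearly empty pools approach.
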
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
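
The HEALTH_WARN above comes from the BLUESTORE_SLOW_OP_ALERT check, raised because osd.1 has reported slow BlueStore operations; it flags latency on that OSD's backing device rather than data unavailability (all 337 PGs stay active+clean throughout this window). A sketch for pulling the same check programmatically, assuming admin keyring access and the health-check JSON layout used by recent Ceph releases:

    import json, subprocess

    # Hedged sketch: fetch health detail as JSON and pick out the BlueStore alert.
    # The key layout ("checks" -> check name -> "summary"/"detail") is an
    # assumption based on recent Ceph releases, not read back from this cluster.
    health = json.loads(subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    alert = health.get("checks", {}).get("BLUESTORE_SLOW_OP_ALERT")
    if alert:
        print(alert["summary"]["message"])
        for item in alert.get("detail", []):
            print("  ", item["message"])
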
Sep 30 14:40:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:40:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:40:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:40:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v732: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:40:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:00 compute-0 sudo[267929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:40:00 compute-0 sudo[267929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:40:00 compute-0 sudo[267929]: pam_unix(sudo:session): session closed for user root
Sep 30 14:40:00 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:00 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:00 compute-0 sudo[267954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:40:00 compute-0 sudo[267954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:40:00 compute-0 ceph-mon[74194]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:40:00 compute-0 ceph-mon[74194]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:40:00 compute-0 ceph-mon[74194]:      osd.1 observed slow operation indications in BlueStore
Sep 30 14:40:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:40:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:40:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:40:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:40:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:00.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:40:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:40:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:40:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:40:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:40:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:40:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:40:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:40:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:40:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:40:01 compute-0 podman[268019]: 2025-09-30 14:40:01.15020852 +0000 UTC m=+0.053882231 container create 22d82bac983dd91a0ade07fa34c767e5c9b4f94ca87dae1e04d1165f4bd5856c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 14:40:01 compute-0 systemd[1]: Started libpod-conmon-22d82bac983dd91a0ade07fa34c767e5c9b4f94ca87dae1e04d1165f4bd5856c.scope.
Sep 30 14:40:01 compute-0 podman[268019]: 2025-09-30 14:40:01.130859453 +0000 UTC m=+0.034533194 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:40:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:40:01 compute-0 podman[268019]: 2025-09-30 14:40:01.268583435 +0000 UTC m=+0.172257206 container init 22d82bac983dd91a0ade07fa34c767e5c9b4f94ca87dae1e04d1165f4bd5856c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:40:01 compute-0 podman[268019]: 2025-09-30 14:40:01.280021041 +0000 UTC m=+0.183694772 container start 22d82bac983dd91a0ade07fa34c767e5c9b4f94ca87dae1e04d1165f4bd5856c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_keldysh, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:40:01 compute-0 podman[268019]: 2025-09-30 14:40:01.284443949 +0000 UTC m=+0.188117730 container attach 22d82bac983dd91a0ade07fa34c767e5c9b4f94ca87dae1e04d1165f4bd5856c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:40:01 compute-0 flamboyant_keldysh[268036]: 167 167
Sep 30 14:40:01 compute-0 systemd[1]: libpod-22d82bac983dd91a0ade07fa34c767e5c9b4f94ca87dae1e04d1165f4bd5856c.scope: Deactivated successfully.
Sep 30 14:40:01 compute-0 podman[268019]: 2025-09-30 14:40:01.289983517 +0000 UTC m=+0.193657208 container died 22d82bac983dd91a0ade07fa34c767e5c9b4f94ca87dae1e04d1165f4bd5856c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bb1f82b0dc26716d49efe7412d44e6237c60018c7d54a1ad1a2e41e3089df10-merged.mount: Deactivated successfully.
Sep 30 14:40:01 compute-0 podman[268019]: 2025-09-30 14:40:01.339985634 +0000 UTC m=+0.243659325 container remove 22d82bac983dd91a0ade07fa34c767e5c9b4f94ca87dae1e04d1165f4bd5856c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_keldysh, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:40:01 compute-0 systemd[1]: libpod-conmon-22d82bac983dd91a0ade07fa34c767e5c9b4f94ca87dae1e04d1165f4bd5856c.scope: Deactivated successfully.
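
The short-lived podman containers in this window (flamboyant_keldysh here, then blissful_gagarin, gifted_bartik, elastic_morse, stoic_agnesi) are cephadm running one-off commands inside the service image; the "167 167" each prints looks like cephadm probing the ceph uid/gid baked into the image before the ceph-volume runs that follow. A sketch of an equivalent manual probe (the reading of "167 167" is an inference, not taken from the cephadm source):

    import subprocess

    # Hedged sketch: ask the same image for the owner of /var/lib/ceph, which is
    # what the "167 167" output above appears to be. Image digest copied from the log.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(out)   # expected: "167 167", the ceph uid/gid inside the image
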
Sep 30 14:40:01 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:01 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:01.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:01 compute-0 podman[268061]: 2025-09-30 14:40:01.552376671 +0000 UTC m=+0.055008141 container create df4be715d5ed6680d4c466f8b7744c4347534e35d490e8d7daf2c5e84150f414 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gagarin, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:40:01 compute-0 systemd[1]: Started libpod-conmon-df4be715d5ed6680d4c466f8b7744c4347534e35d490e8d7daf2c5e84150f414.scope.
Sep 30 14:40:01 compute-0 podman[268061]: 2025-09-30 14:40:01.531936974 +0000 UTC m=+0.034568474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:40:01 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf75a596c4e1d796501656aed0d26b3903b5d94f5dc23ecb8f8633f557b6567/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf75a596c4e1d796501656aed0d26b3903b5d94f5dc23ecb8f8633f557b6567/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf75a596c4e1d796501656aed0d26b3903b5d94f5dc23ecb8f8633f557b6567/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf75a596c4e1d796501656aed0d26b3903b5d94f5dc23ecb8f8633f557b6567/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf75a596c4e1d796501656aed0d26b3903b5d94f5dc23ecb8f8633f557b6567/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:01 compute-0 podman[268061]: 2025-09-30 14:40:01.662816843 +0000 UTC m=+0.165448343 container init df4be715d5ed6680d4c466f8b7744c4347534e35d490e8d7daf2c5e84150f414 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gagarin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 14:40:01 compute-0 podman[268061]: 2025-09-30 14:40:01.670763236 +0000 UTC m=+0.173394736 container start df4be715d5ed6680d4c466f8b7744c4347534e35d490e8d7daf2c5e84150f414 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gagarin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Sep 30 14:40:01 compute-0 podman[268061]: 2025-09-30 14:40:01.675006049 +0000 UTC m=+0.177637519 container attach df4be715d5ed6680d4c466f8b7744c4347534e35d490e8d7daf2c5e84150f414 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:40:01 compute-0 ceph-mon[74194]: pgmap v732: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:40:02 compute-0 blissful_gagarin[268077]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:40:02 compute-0 blissful_gagarin[268077]: --> All data devices are unavailable
Sep 30 14:40:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:02 compute-0 systemd[1]: libpod-df4be715d5ed6680d4c466f8b7744c4347534e35d490e8d7daf2c5e84150f414.scope: Deactivated successfully.
Sep 30 14:40:02 compute-0 podman[268061]: 2025-09-30 14:40:02.068295373 +0000 UTC m=+0.570926903 container died df4be715d5ed6680d4c466f8b7744c4347534e35d490e8d7daf2c5e84150f414 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gagarin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddf75a596c4e1d796501656aed0d26b3903b5d94f5dc23ecb8f8633f557b6567-merged.mount: Deactivated successfully.
Sep 30 14:40:02 compute-0 podman[268061]: 2025-09-30 14:40:02.12427475 +0000 UTC m=+0.626906220 container remove df4be715d5ed6680d4c466f8b7744c4347534e35d490e8d7daf2c5e84150f414 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 14:40:02 compute-0 systemd[1]: libpod-conmon-df4be715d5ed6680d4c466f8b7744c4347534e35d490e8d7daf2c5e84150f414.scope: Deactivated successfully.
Sep 30 14:40:02 compute-0 sudo[267954]: pam_unix(sudo:session): session closed for user root
Sep 30 14:40:02 compute-0 sudo[268106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:40:02 compute-0 sudo[268106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:40:02 compute-0 sudo[268106]: pam_unix(sudo:session): session closed for user root
Sep 30 14:40:02 compute-0 sudo[268131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:40:02 compute-0 sudo[268131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:40:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:40:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:02 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:02 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:02 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:02 compute-0 podman[268198]: 2025-09-30 14:40:02.875427742 +0000 UTC m=+0.061168867 container create 4f7960cb11cf6141134e44d4f9e0ef0124bb93365bf9653bf1fe4598087a6825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bartik, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 14:40:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:02.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:02 compute-0 systemd[1]: Started libpod-conmon-4f7960cb11cf6141134e44d4f9e0ef0124bb93365bf9653bf1fe4598087a6825.scope.
Sep 30 14:40:02 compute-0 podman[268198]: 2025-09-30 14:40:02.845049729 +0000 UTC m=+0.030790914 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:40:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:40:02 compute-0 podman[268198]: 2025-09-30 14:40:02.986606754 +0000 UTC m=+0.172347909 container init 4f7960cb11cf6141134e44d4f9e0ef0124bb93365bf9653bf1fe4598087a6825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:40:02 compute-0 podman[268198]: 2025-09-30 14:40:02.999206711 +0000 UTC m=+0.184947796 container start 4f7960cb11cf6141134e44d4f9e0ef0124bb93365bf9653bf1fe4598087a6825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:40:03 compute-0 gifted_bartik[268214]: 167 167
Sep 30 14:40:03 compute-0 systemd[1]: libpod-4f7960cb11cf6141134e44d4f9e0ef0124bb93365bf9653bf1fe4598087a6825.scope: Deactivated successfully.
Sep 30 14:40:03 compute-0 podman[268198]: 2025-09-30 14:40:03.00704612 +0000 UTC m=+0.192787205 container attach 4f7960cb11cf6141134e44d4f9e0ef0124bb93365bf9653bf1fe4598087a6825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bartik, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:40:03 compute-0 podman[268198]: 2025-09-30 14:40:03.007944484 +0000 UTC m=+0.193685569 container died 4f7960cb11cf6141134e44d4f9e0ef0124bb93365bf9653bf1fe4598087a6825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bartik, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea092f508f6ce47c354caeb93eaefe071cfa0c3eefa2c2b75ef753b341956472-merged.mount: Deactivated successfully.
Sep 30 14:40:03 compute-0 podman[268198]: 2025-09-30 14:40:03.039613981 +0000 UTC m=+0.225355066 container remove 4f7960cb11cf6141134e44d4f9e0ef0124bb93365bf9653bf1fe4598087a6825 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bartik, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:40:03 compute-0 systemd[1]: libpod-conmon-4f7960cb11cf6141134e44d4f9e0ef0124bb93365bf9653bf1fe4598087a6825.scope: Deactivated successfully.
Sep 30 14:40:03 compute-0 podman[268236]: 2025-09-30 14:40:03.282872764 +0000 UTC m=+0.054530269 container create 06d8c094d23f8adf0de4f4f0b86f01cb29baa7351f3f0ccdafc3a001f57e0b69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:40:03 compute-0 systemd[1]: Started libpod-conmon-06d8c094d23f8adf0de4f4f0b86f01cb29baa7351f3f0ccdafc3a001f57e0b69.scope.
Sep 30 14:40:03 compute-0 podman[268236]: 2025-09-30 14:40:03.259353276 +0000 UTC m=+0.031010871 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:40:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55e47e2f74d728b176c44889a46ee8502930d5357d5b834e675775aa644249e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55e47e2f74d728b176c44889a46ee8502930d5357d5b834e675775aa644249e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55e47e2f74d728b176c44889a46ee8502930d5357d5b834e675775aa644249e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55e47e2f74d728b176c44889a46ee8502930d5357d5b834e675775aa644249e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:03 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:03 compute-0 sudo[268252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:40:03 compute-0 sudo[268252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:40:03 compute-0 sudo[268252]: pam_unix(sudo:session): session closed for user root
Sep 30 14:40:03 compute-0 podman[268236]: 2025-09-30 14:40:03.383634448 +0000 UTC m=+0.155292053 container init 06d8c094d23f8adf0de4f4f0b86f01cb29baa7351f3f0ccdafc3a001f57e0b69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:40:03 compute-0 podman[268236]: 2025-09-30 14:40:03.391233051 +0000 UTC m=+0.162890596 container start 06d8c094d23f8adf0de4f4f0b86f01cb29baa7351f3f0ccdafc3a001f57e0b69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_morse, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:40:03 compute-0 podman[268236]: 2025-09-30 14:40:03.395054373 +0000 UTC m=+0.166711928 container attach 06d8c094d23f8adf0de4f4f0b86f01cb29baa7351f3f0ccdafc3a001f57e0b69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_morse, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:40:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:03.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:03.615Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:40:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:03.617Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
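
Both dashboard webhook receivers are failing from this alertmanager: compute-1 with "context deadline exceeded" and compute-2 with a TCP connect timeout on 8443, so the notification is dropped after two attempts. A minimal reachability probe from compute-0, using the receiver URL exactly as logged (alertmanager POSTs JSON; a plain GET is enough to see whether the port answers at all):

    import urllib.error, urllib.request

    # Hedged sketch: probe the receiver endpoint that alertmanager timed out on.
    # URL copied from the log line above; the 2-second budget is arbitrary.
    URL = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        with urllib.request.urlopen(URL, timeout=2) as resp:
            print("reachable, HTTP", resp.status)
    except (urllib.error.URLError, OSError) as exc:
        print("unreachable:", exc)
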
Sep 30 14:40:03 compute-0 elastic_morse[268257]: {
Sep 30 14:40:03 compute-0 elastic_morse[268257]:     "0": [
Sep 30 14:40:03 compute-0 elastic_morse[268257]:         {
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "devices": [
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "/dev/loop3"
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             ],
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "lv_name": "ceph_lv0",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "lv_size": "21470642176",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "name": "ceph_lv0",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "tags": {
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.cluster_name": "ceph",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.crush_device_class": "",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.encrypted": "0",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.osd_id": "0",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.type": "block",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.vdo": "0",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:                 "ceph.with_tpm": "0"
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             },
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "type": "block",
Sep 30 14:40:03 compute-0 elastic_morse[268257]:             "vg_name": "ceph_vg0"
Sep 30 14:40:03 compute-0 elastic_morse[268257]:         }
Sep 30 14:40:03 compute-0 elastic_morse[268257]:     ]
Sep 30 14:40:03 compute-0 elastic_morse[268257]: }
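
The JSON above is the output of the `ceph-volume ... lvm list --format json` call launched at 14:40:02: one top-level key per OSD id, here only osd.0 on /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3). It also explains the "All data devices are unavailable" result of the lvm batch run a couple of seconds earlier: the only candidate LV already carries an OSD, so cephadm's periodic apply creates nothing new. A sketch that indexes this output by OSD id (feed the captured JSON on stdin):

    import json, sys

    # Hedged sketch: index `ceph-volume lvm list --format json` output (like the
    # block captured above) by OSD id and report the backing LV of each block device.
    lvm_list = json.load(sys.stdin)
    for osd_id, devices in lvm_list.items():
        for dev in devices:
            if dev.get("type") == "block":
                print(f"osd.{osd_id}: {dev['lv_path']} "
                      f"(osd_fsid {dev['tags']['ceph.osd_fsid']})")
    # With the JSON above:
    #   osd.0: /dev/ceph_vg0/ceph_lv0 (osd_fsid 1bf35304-bfb4-41f5-b832-570aa31de1b2)
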
Sep 30 14:40:03 compute-0 systemd[1]: libpod-06d8c094d23f8adf0de4f4f0b86f01cb29baa7351f3f0ccdafc3a001f57e0b69.scope: Deactivated successfully.
Sep 30 14:40:03 compute-0 podman[268236]: 2025-09-30 14:40:03.70277823 +0000 UTC m=+0.474435775 container died 06d8c094d23f8adf0de4f4f0b86f01cb29baa7351f3f0ccdafc3a001f57e0b69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_morse, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 14:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f55e47e2f74d728b176c44889a46ee8502930d5357d5b834e675775aa644249e-merged.mount: Deactivated successfully.
Sep 30 14:40:03 compute-0 podman[268236]: 2025-09-30 14:40:03.761616363 +0000 UTC m=+0.533273908 container remove 06d8c094d23f8adf0de4f4f0b86f01cb29baa7351f3f0ccdafc3a001f57e0b69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_morse, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:40:03 compute-0 systemd[1]: libpod-conmon-06d8c094d23f8adf0de4f4f0b86f01cb29baa7351f3f0ccdafc3a001f57e0b69.scope: Deactivated successfully.
Sep 30 14:40:03 compute-0 sudo[268131]: pam_unix(sudo:session): session closed for user root
Sep 30 14:40:03 compute-0 ceph-mon[74194]: pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Sep 30 14:40:03 compute-0 sudo[268301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:40:03 compute-0 sudo[268301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:40:03 compute-0 sudo[268301]: pam_unix(sudo:session): session closed for user root
Sep 30 14:40:03 compute-0 sudo[268326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:40:03 compute-0 sudo[268326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:40:04 compute-0 podman[268391]: 2025-09-30 14:40:04.436613099 +0000 UTC m=+0.057172350 container create de8f44596947e0767ff5c5e71adeba0cb34533897ac54df6535cdaf2d73c0629 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:40:04 compute-0 systemd[1]: Started libpod-conmon-de8f44596947e0767ff5c5e71adeba0cb34533897ac54df6535cdaf2d73c0629.scope.
Sep 30 14:40:04 compute-0 podman[268391]: 2025-09-30 14:40:04.413813839 +0000 UTC m=+0.034373070 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:40:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:40:04 compute-0 podman[268391]: 2025-09-30 14:40:04.533386126 +0000 UTC m=+0.153945367 container init de8f44596947e0767ff5c5e71adeba0cb34533897ac54df6535cdaf2d73c0629 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_agnesi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 14:40:04 compute-0 podman[268391]: 2025-09-30 14:40:04.54102211 +0000 UTC m=+0.161581321 container start de8f44596947e0767ff5c5e71adeba0cb34533897ac54df6535cdaf2d73c0629 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_agnesi, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:40:04 compute-0 podman[268391]: 2025-09-30 14:40:04.544221385 +0000 UTC m=+0.164780596 container attach de8f44596947e0767ff5c5e71adeba0cb34533897ac54df6535cdaf2d73c0629 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_agnesi, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:40:04 compute-0 stoic_agnesi[268407]: 167 167
Sep 30 14:40:04 compute-0 systemd[1]: libpod-de8f44596947e0767ff5c5e71adeba0cb34533897ac54df6535cdaf2d73c0629.scope: Deactivated successfully.
Sep 30 14:40:04 compute-0 podman[268391]: 2025-09-30 14:40:04.546744503 +0000 UTC m=+0.167303704 container died de8f44596947e0767ff5c5e71adeba0cb34533897ac54df6535cdaf2d73c0629 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:40:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-270d8199a3806c64a7fdec6d60c85f0d42a7db2dd3fd3317f589b950322a0caf-merged.mount: Deactivated successfully.
Sep 30 14:40:04 compute-0 podman[268391]: 2025-09-30 14:40:04.58630025 +0000 UTC m=+0.206859471 container remove de8f44596947e0767ff5c5e71adeba0cb34533897ac54df6535cdaf2d73c0629 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_agnesi, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:40:04 compute-0 systemd[1]: libpod-conmon-de8f44596947e0767ff5c5e71adeba0cb34533897ac54df6535cdaf2d73c0629.scope: Deactivated successfully.
Sep 30 14:40:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:40:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:04 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:04 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/144004 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:40:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:04] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Sep 30 14:40:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:04] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Sep 30 14:40:04 compute-0 podman[268432]: 2025-09-30 14:40:04.806796515 +0000 UTC m=+0.068975475 container create b89f93343e932b165451b65b7a18b9bd6e63ce78c113331d5239eec3be234ddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hellman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:40:04 compute-0 systemd[1]: Started libpod-conmon-b89f93343e932b165451b65b7a18b9bd6e63ce78c113331d5239eec3be234ddb.scope.
Sep 30 14:40:04 compute-0 podman[268432]: 2025-09-30 14:40:04.776865105 +0000 UTC m=+0.039044105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:40:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:04.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ac3b6c89858c744226539b932c8bea463b35cb32696f2ab95fb0f26b88093/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ac3b6c89858c744226539b932c8bea463b35cb32696f2ab95fb0f26b88093/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ac3b6c89858c744226539b932c8bea463b35cb32696f2ab95fb0f26b88093/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ac3b6c89858c744226539b932c8bea463b35cb32696f2ab95fb0f26b88093/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:04 compute-0 podman[268432]: 2025-09-30 14:40:04.910745444 +0000 UTC m=+0.172924484 container init b89f93343e932b165451b65b7a18b9bd6e63ce78c113331d5239eec3be234ddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hellman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:40:04 compute-0 podman[268432]: 2025-09-30 14:40:04.9176973 +0000 UTC m=+0.179876220 container start b89f93343e932b165451b65b7a18b9bd6e63ce78c113331d5239eec3be234ddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hellman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 14:40:04 compute-0 podman[268432]: 2025-09-30 14:40:04.920767362 +0000 UTC m=+0.182946362 container attach b89f93343e932b165451b65b7a18b9bd6e63ce78c113331d5239eec3be234ddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:40:05 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:05 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:05.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:05 compute-0 lvm[268524]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:40:05 compute-0 lvm[268524]: VG ceph_vg0 finished
Sep 30 14:40:05 compute-0 naughty_hellman[268448]: {}
Sep 30 14:40:05 compute-0 systemd[1]: libpod-b89f93343e932b165451b65b7a18b9bd6e63ce78c113331d5239eec3be234ddb.scope: Deactivated successfully.
Sep 30 14:40:05 compute-0 systemd[1]: libpod-b89f93343e932b165451b65b7a18b9bd6e63ce78c113331d5239eec3be234ddb.scope: Consumed 1.282s CPU time.
Sep 30 14:40:05 compute-0 podman[268432]: 2025-09-30 14:40:05.714236414 +0000 UTC m=+0.976415374 container died b89f93343e932b165451b65b7a18b9bd6e63ce78c113331d5239eec3be234ddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:40:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c9ac3b6c89858c744226539b932c8bea463b35cb32696f2ab95fb0f26b88093-merged.mount: Deactivated successfully.
Sep 30 14:40:05 compute-0 podman[268432]: 2025-09-30 14:40:05.76946 +0000 UTC m=+1.031638920 container remove b89f93343e932b165451b65b7a18b9bd6e63ce78c113331d5239eec3be234ddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hellman, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 14:40:05 compute-0 systemd[1]: libpod-conmon-b89f93343e932b165451b65b7a18b9bd6e63ce78c113331d5239eec3be234ddb.scope: Deactivated successfully.
Sep 30 14:40:05 compute-0 sudo[268326]: pam_unix(sudo:session): session closed for user root
Sep 30 14:40:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:40:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:40:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:40:05 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:40:05 compute-0 ceph-mon[74194]: pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:40:05 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3272876967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:05 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:40:05 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:40:05 compute-0 sudo[268543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:40:05 compute-0 sudo[268543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:40:05 compute-0 sudo[268543]: pam_unix(sudo:session): session closed for user root
Sep 30 14:40:05 compute-0 nova_compute[261524]: 2025-09-30 14:40:05.972 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:40:06 compute-0 nova_compute[261524]: 2025-09-30 14:40:06.585 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:06 compute-0 nova_compute[261524]: 2025-09-30 14:40:06.585 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:40:06 compute-0 nova_compute[261524]: 2025-09-30 14:40:06.610 2 DEBUG nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Sep 30 14:40:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:06 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:06 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:06 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:06 compute-0 nova_compute[261524]: 2025-09-30 14:40:06.745 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:06 compute-0 nova_compute[261524]: 2025-09-30 14:40:06.746 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:06 compute-0 nova_compute[261524]: 2025-09-30 14:40:06.755 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Sep 30 14:40:06 compute-0 nova_compute[261524]: 2025-09-30 14:40:06.755 2 INFO nova.compute.claims [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Claim successful on node compute-0.ctlplane.example.com
Sep 30 14:40:06 compute-0 nova_compute[261524]: 2025-09-30 14:40:06.874 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:06.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:06 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3383029270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:07.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:40:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:40:07 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972646914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.372 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:07 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.378 2 DEBUG nova.compute.provider_tree [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.424 2 DEBUG nova.scheduler.client.report [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:40:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:07.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.508 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.509 2 DEBUG nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.605 2 DEBUG nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.606 2 DEBUG nova.network.neutron [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.665 2 INFO nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.688 2 DEBUG nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.812 2 DEBUG nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.813 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.814 2 INFO nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Creating image(s)
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.844 2 DEBUG nova.storage.rbd_utils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.881 2 DEBUG nova.storage.rbd_utils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:40:07 compute-0 ceph-mon[74194]: pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Sep 30 14:40:07 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2972646914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.911 2 DEBUG nova.storage.rbd_utils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.915 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:07 compute-0 nova_compute[261524]: 2025-09-30 14:40:07.916 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:08 compute-0 nova_compute[261524]: 2025-09-30 14:40:08.181 2 DEBUG nova.virt.libvirt.imagebackend [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Image locations are: [{'url': 'rbd://5e3c7776-ac03-5698-b79f-a6dc2d80cae6/images/7c70cf84-edc3-42b2-a094-ae3c1dbaffe4/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://5e3c7776-ac03-5698-b79f-a6dc2d80cae6/images/7c70cf84-edc3-42b2-a094-ae3c1dbaffe4/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Sep 30 14:40:08 compute-0 nova_compute[261524]: 2025-09-30 14:40:08.205 2 WARNING oslo_policy.policy [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Sep 30 14:40:08 compute-0 nova_compute[261524]: 2025-09-30 14:40:08.205 2 WARNING oslo_policy.policy [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Sep 30 14:40:08 compute-0 nova_compute[261524]: 2025-09-30 14:40:08.209 2 DEBUG nova.policy [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59c80c4f189d4667aec64b43afc69ed2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Sep 30 14:40:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v736: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:40:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:08 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0928004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:08 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000053s ======
Sep 30 14:40:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:08.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Sep 30 14:40:08 compute-0 nova_compute[261524]: 2025-09-30 14:40:08.999 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:40:08 compute-0 nova_compute[261524]: 2025-09-30 14:40:08.999 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:40:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:09 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:09.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.525 2 DEBUG nova.network.neutron [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Successfully created port: 282b94c3-1056-44d8-9ca4-959a3718bd94 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.733 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.795 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639.part --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.797 2 DEBUG nova.virt.images [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] 7c70cf84-edc3-42b2-a094-ae3c1dbaffe4 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.799 2 DEBUG nova.privsep.utils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.799 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639.part /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.954 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.977 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639.part /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639.converted" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:09 compute-0 nova_compute[261524]: 2025-09-30 14:40:09.982 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.000 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.001 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.002 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.002 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.002 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.003 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.047 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639.converted --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.048 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.078 2 DEBUG nova.storage.rbd_utils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.082 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:10 compute-0 ceph-mon[74194]: pgmap v736: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Sep 30 14:40:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Sep 30 14:40:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Sep 30 14:40:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v738: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Sep 30 14:40:10 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Sep 30 14:40:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:10 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:10 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:10 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09280041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:10.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:10 compute-0 nova_compute[261524]: 2025-09-30 14:40:10.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:40:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 14:40:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2912325926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:40:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 14:40:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2912325926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.069 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.069 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.070 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.070 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.070 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:11 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:11 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:11.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:40:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3416616448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.529 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Sep 30 14:40:11 compute-0 ceph-mon[74194]: pgmap v738: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Sep 30 14:40:11 compute-0 ceph-mon[74194]: osdmap e147: 3 total, 3 up, 3 in
Sep 30 14:40:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2912325926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:40:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2912325926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:40:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4018328064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3416616448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.728 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.729 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4866MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.729 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.729 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Sep 30 14:40:11 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.799 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Instance 4a2e4963-f354-48e2-af39-ce9e01d9eda1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.800 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.800 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.831 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:11 compute-0 nova_compute[261524]: 2025-09-30 14:40:11.997 2 DEBUG nova.network.neutron [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Successfully updated port: 282b94c3-1056-44d8-9ca4-959a3718bd94 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.021 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.021 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquired lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.022 2 DEBUG nova.network.neutron [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Sep 30 14:40:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.206 2 DEBUG nova.network.neutron [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Sep 30 14:40:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:40:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1539829678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.302 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.310 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.371 2 ERROR nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [req-45a8f6f3-1d3b-4061-995b-455f705de3a2] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 06783cfc-6d32-454d-9501-ebd8adea3735.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-45a8f6f3-1d3b-4061-995b-455f705de3a2"}]}
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.395 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing inventories for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.432 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating ProviderTree inventory for provider 06783cfc-6d32-454d-9501-ebd8adea3735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.433 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.447 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing aggregate associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.467 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing trait associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.498 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.529 2 DEBUG nova.compute.manager [req-1fe25cc5-ead0-40ea-8c3e-dd08ed4975da req-25fc3794-4cb0-440e-929c-213762624d15 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received event network-changed-282b94c3-1056-44d8-9ca4-959a3718bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.530 2 DEBUG nova.compute.manager [req-1fe25cc5-ead0-40ea-8c3e-dd08ed4975da req-25fc3794-4cb0-440e-929c-213762624d15 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Refreshing instance network info cache due to event network-changed-282b94c3-1056-44d8-9ca4-959a3718bd94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:40:12 compute-0 nova_compute[261524]: 2025-09-30 14:40:12.530 2 DEBUG oslo_concurrency.lockutils [req-1fe25cc5-ead0-40ea-8c3e-dd08ed4975da req-25fc3794-4cb0-440e-929c-213762624d15 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:40:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v740: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 11 op/s
Sep 30 14:40:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:12 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:12 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:12 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:12 compute-0 ceph-mon[74194]: osdmap e148: 3 total, 3 up, 3 in
Sep 30 14:40:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3326272809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1539829678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:12.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:40:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880248532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.013 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.021 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.173 2 DEBUG nova.network.neutron [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Updating instance_info_cache with network_info: [{"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.268 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updated inventory for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.269 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating resource provider 06783cfc-6d32-454d-9501-ebd8adea3735 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.270 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.298 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.337 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.338 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.338 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Releasing lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.339 2 DEBUG nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Instance network_info: |[{"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.339 2 DEBUG oslo_concurrency.lockutils [req-1fe25cc5-ead0-40ea-8c3e-dd08ed4975da req-25fc3794-4cb0-440e-929c-213762624d15 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.340 2 DEBUG nova.network.neutron [req-1fe25cc5-ead0-40ea-8c3e-dd08ed4975da req-25fc3794-4cb0-440e-929c-213762624d15 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Refreshing network info cache for port 282b94c3-1056-44d8-9ca4-959a3718bd94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:40:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:13 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.392 2 DEBUG nova.storage.rbd_utils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] resizing rbd image 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Sep 30 14:40:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:13.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:13.618Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:40:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:13 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.887 2 DEBUG nova.objects.instance [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a2e4963-f354-48e2-af39-ce9e01d9eda1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.901 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.902 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Ensure instance console log exists: /var/lib/nova/instances/4a2e4963-f354-48e2-af39-ce9e01d9eda1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.903 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.903 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.904 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.910 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Start _get_guest_xml network_info=[{"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T14:39:17Z,direct_url=<?>,disk_format='qcow2',id=7c70cf84-edc3-42b2-a094-ae3c1dbaffe4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5beed35d375f4bd185a6774dc475e0b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T14:39:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'disk_bus': 'virtio', 'image_id': '7c70cf84-edc3-42b2-a094-ae3c1dbaffe4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.916 2 WARNING nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.922 2 DEBUG nova.virt.libvirt.host [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.923 2 DEBUG nova.virt.libvirt.host [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.926 2 DEBUG nova.virt.libvirt.host [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.927 2 DEBUG nova.virt.libvirt.host [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.929 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.929 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T14:39:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='64f3d3b9-41b6-4b89-8bbd-f654faf17546',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T14:39:17Z,direct_url=<?>,disk_format='qcow2',id=7c70cf84-edc3-42b2-a094-ae3c1dbaffe4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5beed35d375f4bd185a6774dc475e0b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T14:39:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.931 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.931 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.932 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.933 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.933 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.934 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.934 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.935 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.935 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.935 2 DEBUG nova.virt.hardware [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.939 2 DEBUG nova.privsep.utils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Sep 30 14:40:13 compute-0 nova_compute[261524]: 2025-09-30 14:40:13.939 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:13 compute-0 ceph-mon[74194]: pgmap v740: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 11 op/s
Sep 30 14:40:13 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1880248532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:14 compute-0 nova_compute[261524]: 2025-09-30 14:40:14.338 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:40:14 compute-0 nova_compute[261524]: 2025-09-30 14:40:14.339 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:40:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 14:40:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536464946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:40:14 compute-0 nova_compute[261524]: 2025-09-30 14:40:14.436 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:14 compute-0 nova_compute[261524]: 2025-09-30 14:40:14.470 2 DEBUG nova.storage.rbd_utils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:40:14 compute-0 nova_compute[261524]: 2025-09-30 14:40:14.475 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v741: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 11 op/s
Sep 30 14:40:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:14 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:40:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:40:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:14 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:14] "GET /metrics HTTP/1.1" 200 48505 "" "Prometheus/2.51.0"
Sep 30 14:40:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:14] "GET /metrics HTTP/1.1" 200 48505 "" "Prometheus/2.51.0"
Sep 30 14:40:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:14.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 14:40:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1068521447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:40:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3536464946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:40:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:40:14 compute-0 nova_compute[261524]: 2025-09-30 14:40:14.986 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:14 compute-0 nova_compute[261524]: 2025-09-30 14:40:14.989 2 DEBUG nova.virt.libvirt.vif [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-09-30T14:40:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1457461652',display_name='tempest-TestNetworkBasicOps-server-1457461652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1457461652',id=1,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbHcRx+ioPEkeKlOP9E9zJz227uPsnvyJ1Yk+aBS4J9PIyvkuS/b/ZsYDRdrf5CTtnk9Ao6kff0l7PrelfecOiO5NvxZp3J3t640l4shG20oMhTFwH9twPyhww6w5ovpg==',key_name='tempest-TestNetworkBasicOps-693846457',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-dn7nfker',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T14:40:07Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=4a2e4963-f354-48e2-af39-ce9e01d9eda1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Sep 30 14:40:14 compute-0 nova_compute[261524]: 2025-09-30 14:40:14.990 2 DEBUG nova.network.os_vif_util [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:40:14 compute-0 nova_compute[261524]: 2025-09-30 14:40:14.991 2 DEBUG nova.network.os_vif_util [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:6d:09,bridge_name='br-int',has_traffic_filtering=True,id=282b94c3-1056-44d8-9ca4-959a3718bd94,network=Network(ac4ef079-a88d-4ba4-9e93-1ee01981c523),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282b94c3-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:40:14 compute-0 nova_compute[261524]: 2025-09-30 14:40:14.995 2 DEBUG nova.objects.instance [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a2e4963-f354-48e2-af39-ce9e01d9eda1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.015 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] End _get_guest_xml xml=<domain type="kvm">
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <uuid>4a2e4963-f354-48e2-af39-ce9e01d9eda1</uuid>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <name>instance-00000001</name>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <memory>131072</memory>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <vcpu>1</vcpu>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <nova:name>tempest-TestNetworkBasicOps-server-1457461652</nova:name>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <nova:creationTime>2025-09-30 14:40:13</nova:creationTime>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <nova:flavor name="m1.nano">
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <nova:memory>128</nova:memory>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <nova:disk>1</nova:disk>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <nova:swap>0</nova:swap>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <nova:vcpus>1</nova:vcpus>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       </nova:flavor>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <nova:owner>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       </nova:owner>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <nova:ports>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <nova:port uuid="282b94c3-1056-44d8-9ca4-959a3718bd94">
Sep 30 14:40:15 compute-0 nova_compute[261524]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         </nova:port>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       </nova:ports>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     </nova:instance>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <sysinfo type="smbios">
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <system>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <entry name="manufacturer">RDO</entry>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <entry name="product">OpenStack Compute</entry>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <entry name="serial">4a2e4963-f354-48e2-af39-ce9e01d9eda1</entry>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <entry name="uuid">4a2e4963-f354-48e2-af39-ce9e01d9eda1</entry>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <entry name="family">Virtual Machine</entry>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     </system>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <os>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <boot dev="hd"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <smbios mode="sysinfo"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   </os>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <features>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <vmcoreinfo/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   </features>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <clock offset="utc">
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <timer name="hpet" present="no"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <cpu mode="host-model" match="exact">
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <disk type="network" device="disk">
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <driver type="raw" cache="none"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <source protocol="rbd" name="vms/4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk">
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <host name="192.168.122.100" port="6789"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <host name="192.168.122.102" port="6789"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <host name="192.168.122.101" port="6789"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       </source>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <auth username="openstack">
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <secret type="ceph" uuid="5e3c7776-ac03-5698-b79f-a6dc2d80cae6"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <target dev="vda" bus="virtio"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <disk type="network" device="cdrom">
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <driver type="raw" cache="none"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <source protocol="rbd" name="vms/4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk.config">
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <host name="192.168.122.100" port="6789"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <host name="192.168.122.102" port="6789"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <host name="192.168.122.101" port="6789"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       </source>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <auth username="openstack">
Sep 30 14:40:15 compute-0 nova_compute[261524]:         <secret type="ceph" uuid="5e3c7776-ac03-5698-b79f-a6dc2d80cae6"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <target dev="sda" bus="sata"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <interface type="ethernet">
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <mac address="fa:16:3e:d6:6d:09"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <model type="virtio"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <mtu size="1442"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <target dev="tap282b94c3-10"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <serial type="pty">
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <log file="/var/lib/nova/instances/4a2e4963-f354-48e2-af39-ce9e01d9eda1/console.log" append="off"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <video>
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <model type="virtio"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     </video>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <input type="tablet" bus="usb"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <rng model="virtio">
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <backend model="random">/dev/urandom</backend>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <controller type="usb" index="0"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     <memballoon model="virtio">
Sep 30 14:40:15 compute-0 nova_compute[261524]:       <stats period="10"/>
Sep 30 14:40:15 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:40:15 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:40:15 compute-0 nova_compute[261524]: </domain>
Sep 30 14:40:15 compute-0 nova_compute[261524]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.017 2 DEBUG nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Preparing to wait for external event network-vif-plugged-282b94c3-1056-44d8-9ca4-959a3718bd94 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.018 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.018 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.019 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.020 2 DEBUG nova.virt.libvirt.vif [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-09-30T14:40:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1457461652',display_name='tempest-TestNetworkBasicOps-server-1457461652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1457461652',id=1,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbHcRx+ioPEkeKlOP9E9zJz227uPsnvyJ1Yk+aBS4J9PIyvkuS/b/ZsYDRdrf5CTtnk9Ao6kff0l7PrelfecOiO5NvxZp3J3t640l4shG20oMhTFwH9twPyhww6w5ovpg==',key_name='tempest-TestNetworkBasicOps-693846457',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-dn7nfker',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T14:40:07Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=4a2e4963-f354-48e2-af39-ce9e01d9eda1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.020 2 DEBUG nova.network.os_vif_util [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.021 2 DEBUG nova.network.os_vif_util [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:6d:09,bridge_name='br-int',has_traffic_filtering=True,id=282b94c3-1056-44d8-9ca4-959a3718bd94,network=Network(ac4ef079-a88d-4ba4-9e93-1ee01981c523),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282b94c3-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.022 2 DEBUG os_vif [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:6d:09,bridge_name='br-int',has_traffic_filtering=True,id=282b94c3-1056-44d8-9ca4-959a3718bd94,network=Network(ac4ef079-a88d-4ba4-9e93-1ee01981c523),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282b94c3-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.093 2 DEBUG nova.network.neutron [req-1fe25cc5-ead0-40ea-8c3e-dd08ed4975da req-25fc3794-4cb0-440e-929c-213762624d15 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Updated VIF entry in instance network info cache for port 282b94c3-1056-44d8-9ca4-959a3718bd94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.094 2 DEBUG nova.network.neutron [req-1fe25cc5-ead0-40ea-8c3e-dd08ed4975da req-25fc3794-4cb0-440e-929c-213762624d15 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Updating instance_info_cache with network_info: [{"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.114 2 DEBUG ovsdbapp.backend.ovs_idl [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.114 2 DEBUG ovsdbapp.backend.ovs_idl [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.114 2 DEBUG ovsdbapp.backend.ovs_idl [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.119 2 DEBUG oslo_concurrency.lockutils [req-1fe25cc5-ead0-40ea-8c3e-dd08ed4975da req-25fc3794-4cb0-440e-929c-213762624d15 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.149 2 INFO oslo.privsep.daemon [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmphrgo0kea/privsep.sock']
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:15 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:15 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:15.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.835 2 INFO oslo.privsep.daemon [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Spawned new privsep daemon via rootwrap
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.714 630 INFO oslo.privsep.daemon [-] privsep daemon starting
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.720 630 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.724 630 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Sep 30 14:40:15 compute-0 nova_compute[261524]: 2025-09-30 14:40:15.725 630 INFO oslo.privsep.daemon [-] privsep daemon running as pid 630
Sep 30 14:40:16 compute-0 ceph-mon[74194]: pgmap v741: 337 pgs: 337 active+clean; 41 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 11 op/s
Sep 30 14:40:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1068521447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap282b94c3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap282b94c3-10, col_values=(('external_ids', {'iface-id': '282b94c3-1056-44d8-9ca4-959a3718bd94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:6d:09', 'vm-uuid': '4a2e4963-f354-48e2-af39-ce9e01d9eda1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:16 compute-0 NetworkManager[45472]: <info>  [1759243216.1768] manager: (tap282b94c3-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.186 2 INFO os_vif [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:6d:09,bridge_name='br-int',has_traffic_filtering=True,id=282b94c3-1056-44d8-9ca4-959a3718bd94,network=Network(ac4ef079-a88d-4ba4-9e93-1ee01981c523),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282b94c3-10')
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.250 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.251 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.251 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No VIF found with MAC fa:16:3e:d6:6d:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.252 2 INFO nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Using config drive
Sep 30 14:40:16 compute-0 nova_compute[261524]: 2025-09-30 14:40:16.284 2 DEBUG nova.storage.rbd_utils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:40:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v742: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Sep 30 14:40:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:40:16 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:16 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:40:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:16.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:17.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:40:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:17.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:40:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:17.114Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:40:17 compute-0 nova_compute[261524]: 2025-09-30 14:40:17.273 2 INFO nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Creating config drive at /var/lib/nova/instances/4a2e4963-f354-48e2-af39-ce9e01d9eda1/disk.config
Sep 30 14:40:17 compute-0 nova_compute[261524]: 2025-09-30 14:40:17.281 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a2e4963-f354-48e2-af39-ce9e01d9eda1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ad_mq_1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:17 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:17 compute-0 nova_compute[261524]: 2025-09-30 14:40:17.425 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a2e4963-f354-48e2-af39-ce9e01d9eda1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ad_mq_1" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:17 compute-0 nova_compute[261524]: 2025-09-30 14:40:17.466 2 DEBUG nova.storage.rbd_utils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:40:17 compute-0 nova_compute[261524]: 2025-09-30 14:40:17.470 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a2e4963-f354-48e2-af39-ce9e01d9eda1/disk.config 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:40:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:17.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:17 compute-0 nova_compute[261524]: 2025-09-30 14:40:17.782 2 DEBUG oslo_concurrency.processutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a2e4963-f354-48e2-af39-ce9e01d9eda1/disk.config 4a2e4963-f354-48e2-af39-ce9e01d9eda1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:40:17 compute-0 nova_compute[261524]: 2025-09-30 14:40:17.784 2 INFO nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Deleting local config drive /var/lib/nova/instances/4a2e4963-f354-48e2-af39-ce9e01d9eda1/disk.config because it was imported into RBD.
Sep 30 14:40:17 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 14:40:17 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 14:40:17 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Sep 30 14:40:17 compute-0 kernel: tap282b94c3-10: entered promiscuous mode
Sep 30 14:40:17 compute-0 NetworkManager[45472]: <info>  [1759243217.9206] manager: (tap282b94c3-10): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Sep 30 14:40:17 compute-0 ovn_controller[154021]: 2025-09-30T14:40:17Z|00027|binding|INFO|Claiming lport 282b94c3-1056-44d8-9ca4-959a3718bd94 for this chassis.
Sep 30 14:40:17 compute-0 ovn_controller[154021]: 2025-09-30T14:40:17Z|00028|binding|INFO|282b94c3-1056-44d8-9ca4-959a3718bd94: Claiming fa:16:3e:d6:6d:09 10.100.0.9
Sep 30 14:40:17 compute-0 nova_compute[261524]: 2025-09-30 14:40:17.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:17 compute-0 nova_compute[261524]: 2025-09-30 14:40:17.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:17 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:17.948 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:6d:09 10.100.0.9'], port_security=['fa:16:3e:d6:6d:09 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4a2e4963-f354-48e2-af39-ce9e01d9eda1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ef079-a88d-4ba4-9e93-1ee01981c523', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1baa670-82bf-43ae-8178-ebda74520dfe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f089b95-aa99-452c-956e-b34986024bcf, chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=282b94c3-1056-44d8-9ca4-959a3718bd94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:40:17 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:17.950 163966 INFO neutron.agent.ovn.metadata.agent [-] Port 282b94c3-1056-44d8-9ca4-959a3718bd94 in datapath ac4ef079-a88d-4ba4-9e93-1ee01981c523 bound to our chassis
Sep 30 14:40:17 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:17.952 163966 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac4ef079-a88d-4ba4-9e93-1ee01981c523
Sep 30 14:40:17 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:17.953 163966 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpen1yimhz/privsep.sock']
Sep 30 14:40:17 compute-0 systemd-udevd[269010]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:40:17 compute-0 NetworkManager[45472]: <info>  [1759243217.9839] device (tap282b94c3-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 14:40:17 compute-0 NetworkManager[45472]: <info>  [1759243217.9848] device (tap282b94c3-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 14:40:18 compute-0 systemd-machined[215710]: New machine qemu-1-instance-00000001.
Sep 30 14:40:18 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Sep 30 14:40:18 compute-0 nova_compute[261524]: 2025-09-30 14:40:18.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:18 compute-0 ovn_controller[154021]: 2025-09-30T14:40:18Z|00029|binding|INFO|Setting lport 282b94c3-1056-44d8-9ca4-959a3718bd94 ovn-installed in OVS
Sep 30 14:40:18 compute-0 ovn_controller[154021]: 2025-09-30T14:40:18Z|00030|binding|INFO|Setting lport 282b94c3-1056-44d8-9ca4-959a3718bd94 up in Southbound
Sep 30 14:40:18 compute-0 nova_compute[261524]: 2025-09-30 14:40:18.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:18 compute-0 ceph-mon[74194]: pgmap v742: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Sep 30 14:40:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v743: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Sep 30 14:40:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:18 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:18 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:18.628 163966 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Sep 30 14:40:18 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:18.629 163966 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpen1yimhz/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Sep 30 14:40:18 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:18.507 269027 INFO oslo.privsep.daemon [-] privsep daemon starting
Sep 30 14:40:18 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:18.510 269027 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Sep 30 14:40:18 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:18.513 269027 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Sep 30 14:40:18 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:18.513 269027 INFO oslo.privsep.daemon [-] privsep daemon running as pid 269027
Sep 30 14:40:18 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:18.632 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[2da363c2-82a4-443c-8f34-7b4e6276af60]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:18 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:40:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:18.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:40:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:19 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:19 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:19.447 269027 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:19 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:19.447 269027 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:19 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:19.448 269027 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:19.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:19 compute-0 nova_compute[261524]: 2025-09-30 14:40:19.593 2 DEBUG nova.compute.manager [req-6d6fc3a4-dc8c-4887-b7db-ce1db3fe4ea2 req-b9a4f9f5-bcd2-41a6-8dad-c1dd50824f33 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received event network-vif-plugged-282b94c3-1056-44d8-9ca4-959a3718bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:40:19 compute-0 nova_compute[261524]: 2025-09-30 14:40:19.594 2 DEBUG oslo_concurrency.lockutils [req-6d6fc3a4-dc8c-4887-b7db-ce1db3fe4ea2 req-b9a4f9f5-bcd2-41a6-8dad-c1dd50824f33 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:19 compute-0 nova_compute[261524]: 2025-09-30 14:40:19.595 2 DEBUG oslo_concurrency.lockutils [req-6d6fc3a4-dc8c-4887-b7db-ce1db3fe4ea2 req-b9a4f9f5-bcd2-41a6-8dad-c1dd50824f33 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:19 compute-0 nova_compute[261524]: 2025-09-30 14:40:19.595 2 DEBUG oslo_concurrency.lockutils [req-6d6fc3a4-dc8c-4887-b7db-ce1db3fe4ea2 req-b9a4f9f5-bcd2-41a6-8dad-c1dd50824f33 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:19 compute-0 nova_compute[261524]: 2025-09-30 14:40:19.596 2 DEBUG nova.compute.manager [req-6d6fc3a4-dc8c-4887-b7db-ce1db3fe4ea2 req-b9a4f9f5-bcd2-41a6-8dad-c1dd50824f33 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Processing event network-vif-plugged-282b94c3-1056-44d8-9ca4-959a3718bd94 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Sep 30 14:40:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:19 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.006 2 DEBUG nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.007 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243220.0054338, 4a2e4963-f354-48e2-af39-ce9e01d9eda1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.008 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] VM Started (Lifecycle Event)
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.011 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.014 2 INFO nova.virt.libvirt.driver [-] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Instance spawned successfully.
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.015 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.042 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.049 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.053 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.054 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.055 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.055 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.057 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.058 2 DEBUG nova.virt.libvirt.driver [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:40:20 compute-0 ceph-mon[74194]: pgmap v743: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.129 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] During sync_power_state the instance has a pending task (spawning). Skip.
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.132 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243220.0055914, 4a2e4963-f354-48e2-af39-ce9e01d9eda1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.133 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] VM Paused (Lifecycle Event)
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.156 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.156 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[4f59683b-dba4-423f-9231-526a75979d8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.157 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapac4ef079-a1 in ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.159 269027 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapac4ef079-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.160 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[20ed6a2a-6a5e-41ca-8185-96967605f644]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.162 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243220.0106487, 4a2e4963-f354-48e2-af39-ce9e01d9eda1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.163 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] VM Resumed (Lifecycle Event)
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.164 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[73b46cdd-3d2b-4003-af21-7196a0998f4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.169 2 INFO nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Took 12.36 seconds to spawn the instance on the hypervisor.
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.170 2 DEBUG nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.186 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[861159ad-08a2-4c74-ab48-83f5c37d33dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.190 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.205 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.209 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[0f673150-ea4a-4829-9307-3375bd3ef888]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.212 163966 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmptqxohpiq/privsep.sock']
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.235 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] During sync_power_state the instance has a pending task (spawning). Skip.
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.270 2 INFO nova.compute.manager [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Took 13.56 seconds to build instance.
Sep 30 14:40:20 compute-0 nova_compute[261524]: 2025-09-30 14:40:20.295 2 DEBUG oslo_concurrency.lockutils [None req-de80583d-3b04-4a92-bdaa-c8e71decd4a9 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v744: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 42 op/s
Sep 30 14:40:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:20 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:20 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:20 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:20.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.953 163966 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.954 163966 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmptqxohpiq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.800 269085 INFO oslo.privsep.daemon [-] privsep daemon starting
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.804 269085 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.807 269085 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.807 269085 INFO oslo.privsep.daemon [-] privsep daemon running as pid 269085
Sep 30 14:40:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:20.956 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[430b21e4-195b-48fb-a9bb-69afb0695716]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:21 compute-0 nova_compute[261524]: 2025-09-30 14:40:21.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:21 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:21 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:21 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:21.447 269085 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:21 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:21.447 269085 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:21 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:21.447 269085 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:21.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:21 compute-0 nova_compute[261524]: 2025-09-30 14:40:21.669 2 DEBUG nova.compute.manager [req-70d3c228-49c8-481a-83c4-a95e537d36fe req-6a6fba5d-ca04-4cd6-8667-729588956015 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received event network-vif-plugged-282b94c3-1056-44d8-9ca4-959a3718bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:40:21 compute-0 nova_compute[261524]: 2025-09-30 14:40:21.669 2 DEBUG oslo_concurrency.lockutils [req-70d3c228-49c8-481a-83c4-a95e537d36fe req-6a6fba5d-ca04-4cd6-8667-729588956015 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:21 compute-0 nova_compute[261524]: 2025-09-30 14:40:21.670 2 DEBUG oslo_concurrency.lockutils [req-70d3c228-49c8-481a-83c4-a95e537d36fe req-6a6fba5d-ca04-4cd6-8667-729588956015 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:21 compute-0 nova_compute[261524]: 2025-09-30 14:40:21.670 2 DEBUG oslo_concurrency.lockutils [req-70d3c228-49c8-481a-83c4-a95e537d36fe req-6a6fba5d-ca04-4cd6-8667-729588956015 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:21 compute-0 nova_compute[261524]: 2025-09-30 14:40:21.670 2 DEBUG nova.compute.manager [req-70d3c228-49c8-481a-83c4-a95e537d36fe req-6a6fba5d-ca04-4cd6-8667-729588956015 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] No waiting events found dispatching network-vif-plugged-282b94c3-1056-44d8-9ca4-959a3718bd94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:40:21 compute-0 nova_compute[261524]: 2025-09-30 14:40:21.671 2 WARNING nova.compute.manager [req-70d3c228-49c8-481a-83c4-a95e537d36fe req-6a6fba5d-ca04-4cd6-8667-729588956015 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received unexpected event network-vif-plugged-282b94c3-1056-44d8-9ca4-959a3718bd94 for instance with vm_state active and task_state None.
Sep 30 14:40:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Sep 30 14:40:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Sep 30 14:40:22 compute-0 ceph-mon[74194]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.081 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[7bec5c7d-f50f-4640-ab3a-85313cf1883e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 NetworkManager[45472]: <info>  [1759243222.0970] manager: (tapac4ef079-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.102 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7ce775-06f6-41d7-9f9d-4960cbccc8c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 ceph-mon[74194]: pgmap v744: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 42 op/s
Sep 30 14:40:22 compute-0 ceph-mon[74194]: osdmap e149: 3 total, 3 up, 3 in
Sep 30 14:40:22 compute-0 systemd-udevd[269097]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.143 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[2b92171b-4414-419e-bb46-b5224556cb19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.148 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[4de1880f-eccb-4962-be7c-f1d627c40097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 NetworkManager[45472]: <info>  [1759243222.1740] device (tapac4ef079-a0): carrier: link connected
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.182 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[add72cb7-e62e-46e0-b751-42252b2a678a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.202 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[ae94af38-55b4-449b-8f89-23c070698c9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac4ef079-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:61:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658452, 'reachable_time': 44477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269117, 'error': None, 'target': 'ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.218 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[34694d1e-f75c-4ca6-bf7e-1a49ce957f94]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:6173'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658452, 'tstamp': 658452}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269118, 'error': None, 'target': 'ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.234 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d0ec14-cdc5-411a-b953-494838f1bc00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac4ef079-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:61:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658452, 'reachable_time': 44477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269119, 'error': None, 'target': 'ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.270 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[5335fb36-7e96-42ad-a6bc-8111976d4871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.335 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[cab12e83-1b93-445c-964d-ff8967a53f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.337 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4ef079-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.337 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.338 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac4ef079-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:40:22 compute-0 kernel: tapac4ef079-a0: entered promiscuous mode
Sep 30 14:40:22 compute-0 NetworkManager[45472]: <info>  [1759243222.3410] manager: (tapac4ef079-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.344 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac4ef079-a0, col_values=(('external_ids', {'iface-id': 'a812d02b-29c8-4471-9c6d-10114d9a1a29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:40:22 compute-0 ovn_controller[154021]: 2025-09-30T14:40:22Z|00031|binding|INFO|Releasing lport a812d02b-29c8-4471-9c6d-10114d9a1a29 from this chassis (sb_readonly=0)
Sep 30 14:40:22 compute-0 nova_compute[261524]: 2025-09-30 14:40:22.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:22 compute-0 nova_compute[261524]: 2025-09-30 14:40:22.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:22 compute-0 nova_compute[261524]: 2025-09-30 14:40:22.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.365 163966 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ac4ef079-a88d-4ba4-9e93-1ee01981c523.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ac4ef079-a88d-4ba4-9e93-1ee01981c523.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.366 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[b67df2cc-a0b3-408d-8984-1bcfce666daf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.368 163966 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: global
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     log         /dev/log local0 debug
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     log-tag     haproxy-metadata-proxy-ac4ef079-a88d-4ba4-9e93-1ee01981c523
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     user        root
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     group       root
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     maxconn     1024
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     pidfile     /var/lib/neutron/external/pids/ac4ef079-a88d-4ba4-9e93-1ee01981c523.pid.haproxy
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     daemon
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: defaults
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     log global
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     mode http
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     option httplog
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     option dontlognull
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     option http-server-close
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     option forwardfor
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     retries                 3
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     timeout http-request    30s
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     timeout connect         30s
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     timeout client          32s
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     timeout server          32s
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     timeout http-keep-alive 30s
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: listen listener
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     bind 169.254.169.254:80
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:     http-request add-header X-OVN-Network-ID ac4ef079-a88d-4ba4-9e93-1ee01981c523
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Sep 30 14:40:22 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:22.370 163966 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523', 'env', 'PROCESS_TAG=haproxy-ac4ef079-a88d-4ba4-9e93-1ee01981c523', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ac4ef079-a88d-4ba4-9e93-1ee01981c523.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Sep 30 14:40:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v746: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Sep 30 14:40:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:22 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:22 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:22 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:22 compute-0 podman[269152]: 2025-09-30 14:40:22.813960189 +0000 UTC m=+0.075635803 container create 37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Sep 30 14:40:22 compute-0 podman[269152]: 2025-09-30 14:40:22.76759247 +0000 UTC m=+0.029268134 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Sep 30 14:40:22 compute-0 systemd[1]: Started libpod-conmon-37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00.scope.
Sep 30 14:40:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:22.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d6724a7d1d932ac986ff400d430b397abce6fcfecd85371456fa34249192/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:22 compute-0 podman[269165]: 2025-09-30 14:40:22.930921066 +0000 UTC m=+0.080379040 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 14:40:22 compute-0 podman[269152]: 2025-09-30 14:40:22.943559624 +0000 UTC m=+0.205235208 container init 37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:40:22 compute-0 podman[269152]: 2025-09-30 14:40:22.954296281 +0000 UTC m=+0.215971855 container start 37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 14:40:22 compute-0 podman[269167]: 2025-09-30 14:40:22.961403251 +0000 UTC m=+0.090311586 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:40:22 compute-0 podman[269173]: 2025-09-30 14:40:22.972019035 +0000 UTC m=+0.095907865 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Sep 30 14:40:22 compute-0 podman[269166]: 2025-09-30 14:40:22.988101565 +0000 UTC m=+0.124949622 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Sep 30 14:40:22 compute-0 neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523[269203]: [NOTICE]   (269241) : New worker (269250) forked
Sep 30 14:40:22 compute-0 neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523[269203]: [NOTICE]   (269241) : Loading success.
Sep 30 14:40:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:23 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:23 compute-0 sudo[269260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:40:23 compute-0 sudo[269260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:40:23 compute-0 sudo[269260]: pam_unix(sudo:session): session closed for user root
Sep 30 14:40:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:23.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:23.619Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:40:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:23.619Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:40:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:23.620Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:40:24 compute-0 ceph-mon[74194]: pgmap v746: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Sep 30 14:40:24 compute-0 ovn_controller[154021]: 2025-09-30T14:40:24Z|00032|binding|INFO|Releasing lport a812d02b-29c8-4471-9c6d-10114d9a1a29 from this chassis (sb_readonly=0)
Sep 30 14:40:24 compute-0 nova_compute[261524]: 2025-09-30 14:40:24.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:24 compute-0 NetworkManager[45472]: <info>  [1759243224.5446] manager: (patch-br-int-to-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Sep 30 14:40:24 compute-0 NetworkManager[45472]: <info>  [1759243224.5454] device (patch-br-int-to-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:40:24 compute-0 NetworkManager[45472]: <info>  [1759243224.5467] manager: (patch-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Sep 30 14:40:24 compute-0 NetworkManager[45472]: <info>  [1759243224.5471] device (patch-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Sep 30 14:40:24 compute-0 NetworkManager[45472]: <info>  [1759243224.5484] manager: (patch-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Sep 30 14:40:24 compute-0 NetworkManager[45472]: <info>  [1759243224.5491] manager: (patch-br-int-to-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Sep 30 14:40:24 compute-0 NetworkManager[45472]: <info>  [1759243224.5496] device (patch-br-int-to-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Sep 30 14:40:24 compute-0 NetworkManager[45472]: <info>  [1759243224.5500] device (patch-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Sep 30 14:40:24 compute-0 ovn_controller[154021]: 2025-09-30T14:40:24Z|00033|binding|INFO|Releasing lport a812d02b-29c8-4471-9c6d-10114d9a1a29 from this chassis (sb_readonly=0)
Sep 30 14:40:24 compute-0 nova_compute[261524]: 2025-09-30 14:40:24.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:24 compute-0 nova_compute[261524]: 2025-09-30 14:40:24.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v747: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Sep 30 14:40:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:24 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:24 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0904001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/144024 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Sep 30 14:40:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:24] "GET /metrics HTTP/1.1" 200 48505 "" "Prometheus/2.51.0"
Sep 30 14:40:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:24] "GET /metrics HTTP/1.1" 200 48505 "" "Prometheus/2.51.0"
Sep 30 14:40:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:24.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:25 compute-0 nova_compute[261524]: 2025-09-30 14:40:25.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:25 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:25 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:25 compute-0 nova_compute[261524]: 2025-09-30 14:40:25.397 2 DEBUG nova.compute.manager [req-a2033318-fa20-48cc-9186-2497f0527389 req-f32d1833-0c7d-412a-b3ba-4edb641e0bdf e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received event network-changed-282b94c3-1056-44d8-9ca4-959a3718bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:40:25 compute-0 nova_compute[261524]: 2025-09-30 14:40:25.397 2 DEBUG nova.compute.manager [req-a2033318-fa20-48cc-9186-2497f0527389 req-f32d1833-0c7d-412a-b3ba-4edb641e0bdf e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Refreshing instance network info cache due to event network-changed-282b94c3-1056-44d8-9ca4-959a3718bd94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:40:25 compute-0 nova_compute[261524]: 2025-09-30 14:40:25.398 2 DEBUG oslo_concurrency.lockutils [req-a2033318-fa20-48cc-9186-2497f0527389 req-f32d1833-0c7d-412a-b3ba-4edb641e0bdf e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:40:25 compute-0 nova_compute[261524]: 2025-09-30 14:40:25.398 2 DEBUG oslo_concurrency.lockutils [req-a2033318-fa20-48cc-9186-2497f0527389 req-f32d1833-0c7d-412a-b3ba-4edb641e0bdf e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:40:25 compute-0 nova_compute[261524]: 2025-09-30 14:40:25.399 2 DEBUG nova.network.neutron [req-a2033318-fa20-48cc-9186-2497f0527389 req-f32d1833-0c7d-412a-b3ba-4edb641e0bdf e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Refreshing network info cache for port 282b94c3-1056-44d8-9ca4-959a3718bd94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:40:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:25.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:26 compute-0 nova_compute[261524]: 2025-09-30 14:40:26.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:26 compute-0 ceph-mon[74194]: pgmap v747: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Sep 30 14:40:26 compute-0 nova_compute[261524]: 2025-09-30 14:40:26.416 2 DEBUG nova.network.neutron [req-a2033318-fa20-48cc-9186-2497f0527389 req-f32d1833-0c7d-412a-b3ba-4edb641e0bdf e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Updated VIF entry in instance network info cache for port 282b94c3-1056-44d8-9ca4-959a3718bd94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:40:26 compute-0 nova_compute[261524]: 2025-09-30 14:40:26.416 2 DEBUG nova.network.neutron [req-a2033318-fa20-48cc-9186-2497f0527389 req-f32d1833-0c7d-412a-b3ba-4edb641e0bdf e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Updating instance_info_cache with network_info: [{"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:40:26 compute-0 nova_compute[261524]: 2025-09-30 14:40:26.448 2 DEBUG oslo_concurrency.lockutils [req-a2033318-fa20-48cc-9186-2497f0527389 req-f32d1833-0c7d-412a-b3ba-4edb641e0bdf e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:40:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v748: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Sep 30 14:40:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:26 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:26 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:26.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:27.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:40:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:27 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09040039c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:27 compute-0 ceph-mon[74194]: pgmap v748: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Sep 30 14:40:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:27.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v749: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Sep 30 14:40:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:28 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:28 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000054s ======
Sep 30 14:40:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:28.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Sep 30 14:40:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:29 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:29.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:40:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:40:29 compute-0 ceph-mon[74194]: pgmap v749: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Sep 30 14:40:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:40:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:40:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:40:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:40:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:40:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:40:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:40:30 compute-0 nova_compute[261524]: 2025-09-30 14:40:30.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v750: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Sep 30 14:40:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:30 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09040039c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:30 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:30 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:30 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Sep 30 14:40:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:30.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:31 compute-0 nova_compute[261524]: 2025-09-30 14:40:31.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:31 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:31 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:31.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:31 compute-0 ceph-mon[74194]: pgmap v750: 337 pgs: 337 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Sep 30 14:40:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v751: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 579 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Sep 30 14:40:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:32 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:32 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:32 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:32.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:32 compute-0 ovn_controller[154021]: 2025-09-30T14:40:32Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:6d:09 10.100.0.9
Sep 30 14:40:32 compute-0 ovn_controller[154021]: 2025-09-30T14:40:32Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:6d:09 10.100.0.9
Sep 30 14:40:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:33 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:33.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:33.620Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:40:33 compute-0 ceph-mon[74194]: pgmap v751: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 579 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Sep 30 14:40:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v752: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 508 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Sep 30 14:40:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:34 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09040039c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:34 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09040039c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:34] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:40:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:34] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:40:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:34.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:35 compute-0 nova_compute[261524]: 2025-09-30 14:40:35.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:35 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:35.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:35 compute-0 ceph-mon[74194]: pgmap v752: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 508 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Sep 30 14:40:36 compute-0 nova_compute[261524]: 2025-09-30 14:40:36.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v753: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 625 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Sep 30 14:40:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:36 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:36 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:36 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09040039c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:36.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:37.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:40:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:37 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:37.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:37 compute-0 ceph-mon[74194]: pgmap v753: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 625 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Sep 30 14:40:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:38.257 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:40:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:38.258 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:40:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:38.259 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:40:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v754: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:40:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:38 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:38 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09100039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/144038 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:40:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:40:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:38.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:40:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:39 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f09040039c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:39.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:39 compute-0 nova_compute[261524]: 2025-09-30 14:40:39.791 2 INFO nova.compute.manager [None req-ee62704d-ebc4-4550-b07c-0a938fd1ff18 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Get console output
Sep 30 14:40:39 compute-0 nova_compute[261524]: 2025-09-30 14:40:39.800 2 INFO oslo.privsep.daemon [None req-ee62704d-ebc4-4550-b07c-0a938fd1ff18 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp5i2d4vwz/privsep.sock']
Sep 30 14:40:39 compute-0 ceph-mon[74194]: pgmap v754: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:40:40 compute-0 nova_compute[261524]: 2025-09-30 14:40:40.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:40 compute-0 nova_compute[261524]: 2025-09-30 14:40:40.556 2 INFO oslo.privsep.daemon [None req-ee62704d-ebc4-4550-b07c-0a938fd1ff18 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Spawned new privsep daemon via rootwrap
Sep 30 14:40:40 compute-0 nova_compute[261524]: 2025-09-30 14:40:40.450 696 INFO oslo.privsep.daemon [-] privsep daemon starting
Sep 30 14:40:40 compute-0 nova_compute[261524]: 2025-09-30 14:40:40.457 696 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Sep 30 14:40:40 compute-0 nova_compute[261524]: 2025-09-30 14:40:40.460 696 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Sep 30 14:40:40 compute-0 nova_compute[261524]: 2025-09-30 14:40:40.461 696 INFO oslo.privsep.daemon [-] privsep daemon running as pid 696
Sep 30 14:40:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v755: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:40:40 compute-0 nova_compute[261524]: 2025-09-30 14:40:40.644 696 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
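[annotation] The "can't concat NoneType to bytes" message at 14:40:40.644 is a Python TypeError that nova.privsep.libvirt catches and logs while reading the instance console pty; it arises when a read returns None instead of b"". A minimal illustration of that failure mode, with hypothetical names (read_chunk, buf) rather than Nova's actual code:

    # Hedged sketch of the error class behind "can't concat NoneType to bytes":
    # concatenating bytes with None raises a TypeError with exactly that message.
    def read_console(read_chunk):          # read_chunk is a hypothetical callable
        buf = b""
        chunk = read_chunk()               # may return None instead of b"" at EOF
        try:
            buf += chunk
        except TypeError as exc:
            print(f"Ignored error while reading from instance console pty: {exc}")
        return buf

    read_console(lambda: None)
    # Ignored error while reading from instance console pty: can't concat NoneType to bytes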
Sep 30 14:40:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:40 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0934004960 fd 48 proxy header rest len failed header rlen = % (will set dead)
Sep 30 14:40:40 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[265372]: 30/09/2025 14:40:40 : epoch 68dbeb00 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f092c004a40 fd 48 proxy ignored for local
Sep 30 14:40:40 compute-0 kernel: ganesha.nfsd[267539]: segfault at 50 ip 00007f09e5c3132e sp 00007f09b4ff8210 error 4 in libntirpc.so.5.8[7f09e5c16000+2c000] likely on CPU 5 (core 0, socket 5)
Sep 30 14:40:40 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Sep 30 14:40:40 compute-0 systemd[1]: Started Process Core Dump (PID 269311/UID 0).
Sep 30 14:40:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:40.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:41 compute-0 nova_compute[261524]: 2025-09-30 14:40:41.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:41.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:41 compute-0 systemd-coredump[269312]: Process 265376 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 62:
                                                    #0  0x00007f09e5c3132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
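[annotation] The kernel segfault line at 14:40:40 and the core dump above describe the same fault: ip 0x7f09e5c3132e inside libntirpc.so.5.8 mapped at 7f09e5c16000+2c000, which systemd-coredump resolves to libntirpc.so.5.8 + 0x2232e. A small arithmetic sketch relating the two; attributing the 0x7000 difference to the library's text-segment start (kernel prints the executable mapping, coredump the ELF load base) is an assumption:

    # Hedged sketch: relate the kernel segfault line to the systemd-coredump frame.
    ip        = 0x7f09e5c3132e   # faulting instruction pointer (kernel line)
    map_base  = 0x7f09e5c16000   # libntirpc.so.5.8 executable mapping base (kernel line)
    frame_off = 0x2232e          # offset reported by systemd-coredump

    print(hex(ip - map_base))                 # 0x1b32e, offset into the executable mapping
    print(hex(frame_off - (ip - map_base)))   # 0x7000, plausibly the .text start (assumption)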
Sep 30 14:40:42 compute-0 systemd[1]: systemd-coredump@9-269311-0.service: Deactivated successfully.
Sep 30 14:40:42 compute-0 systemd[1]: systemd-coredump@9-269311-0.service: Consumed 1.193s CPU time.
Sep 30 14:40:42 compute-0 ceph-mon[74194]: pgmap v755: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:40:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:42 compute-0 podman[269319]: 2025-09-30 14:40:42.079669002 +0000 UTC m=+0.039573189 container died 2494e710c4141598b3341817e6fb96cd6048dfc708ce657418d5686a2aecab76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:40:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8573f3e31a3bbbdedf48cb8e88a8011d90de131ec9b33330d47f6a5dc20d2a08-merged.mount: Deactivated successfully.
Sep 30 14:40:42 compute-0 podman[269319]: 2025-09-30 14:40:42.124253164 +0000 UTC m=+0.084157361 container remove 2494e710c4141598b3341817e6fb96cd6048dfc708ce657418d5686a2aecab76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 14:40:42 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Main process exited, code=exited, status=139/n/a
Sep 30 14:40:42 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Failed with result 'exit-code'.
Sep 30 14:40:42 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.983s CPU time.
Sep 30 14:40:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v756: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:40:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:42.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:43 compute-0 sudo[269361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:40:43 compute-0 sudo[269361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:40:43 compute-0 sudo[269361]: pam_unix(sudo:session): session closed for user root
Sep 30 14:40:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:43.621Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:40:44 compute-0 ceph-mon[74194]: pgmap v756: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:40:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v757: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 107 KiB/s wr, 26 op/s
Sep 30 14:40:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:40:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:40:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:44] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Sep 30 14:40:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:44] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Sep 30 14:40:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:44.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:40:45 compute-0 nova_compute[261524]: 2025-09-30 14:40:45.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:45.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:46 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:46.001 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:40:46 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:46.002 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:40:46 compute-0 nova_compute[261524]: 2025-09-30 14:40:46.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:46 compute-0 ceph-mon[74194]: pgmap v757: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 107 KiB/s wr, 26 op/s
Sep 30 14:40:46 compute-0 nova_compute[261524]: 2025-09-30 14:40:46.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v758: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 111 KiB/s wr, 27 op/s
Sep 30 14:40:46 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/144046 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:40:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:47.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:40:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:47.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:40:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:47.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
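[annotation] Alertmanager's ceph-dashboard receiver keeps failing to POST to http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver and the compute-2 equivalent, earlier with "context deadline exceeded" and here with "dial tcp ... i/o timeout", which points at unreachable endpoints rather than slow ones. A minimal probe sketch, assuming the requests package is available; the JSON body is only a stand-in, not the exact Alertmanager webhook payload:

    # Hedged sketch: reproduce the failing webhook POST seen in the alertmanager lines above.
    # The endpoint URLs come from the log; the payload is a placeholder.
    import requests

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        url = f"http://{host}:8443/api/prometheus_receiver"
        try:
            r = requests.post(url, json={"status": "firing", "alerts": []}, timeout=5)
            print(url, r.status_code)
        except requests.exceptions.RequestException as exc:
            print(url, "failed:", exc)   # expect a connect timeout, matching "i/o timeout"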
Sep 30 14:40:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:47.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:48 compute-0 ceph-mon[74194]: pgmap v758: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 111 KiB/s wr, 27 op/s
Sep 30 14:40:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v759: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 16 KiB/s wr, 1 op/s
Sep 30 14:40:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:48.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:40:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:49.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:40:50 compute-0 ceph-mon[74194]: pgmap v759: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 16 KiB/s wr, 1 op/s
Sep 30 14:40:50 compute-0 nova_compute[261524]: 2025-09-30 14:40:50.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v760: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 16 KiB/s wr, 1 op/s
Sep 30 14:40:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:51 compute-0 nova_compute[261524]: 2025-09-30 14:40:51.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:51.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:52 compute-0 ceph-mon[74194]: pgmap v760: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 16 KiB/s wr, 1 op/s
Sep 30 14:40:52 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Scheduled restart job, restart counter is at 10.
Sep 30 14:40:52 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:40:52 compute-0 systemd[1]: ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6@nfs.cephfs.2.0.compute-0.qrbicy.service: Consumed 1.983s CPU time.
Sep 30 14:40:52 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6...
Sep 30 14:40:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v761: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 16 KiB/s wr, 1 op/s
Sep 30 14:40:52 compute-0 podman[269443]: 2025-09-30 14:40:52.832591771 +0000 UTC m=+0.057902969 container create d88f0cc72f487145fcaf99f1acc03b1aff53e72e3f3f9612ed1bd244b07dfd6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:40:52 compute-0 unix_chkpwd[269457]: password check failed for user (root)
Sep 30 14:40:52 compute-0 sshd-session[269395]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.28  user=root
Sep 30 14:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb938d147bc7154783ae74efae8df2319ecc5e891a5a0d585fcb44698bb277f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb938d147bc7154783ae74efae8df2319ecc5e891a5a0d585fcb44698bb277f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb938d147bc7154783ae74efae8df2319ecc5e891a5a0d585fcb44698bb277f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb938d147bc7154783ae74efae8df2319ecc5e891a5a0d585fcb44698bb277f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.qrbicy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:40:52 compute-0 podman[269443]: 2025-09-30 14:40:52.886199644 +0000 UTC m=+0.111510862 container init d88f0cc72f487145fcaf99f1acc03b1aff53e72e3f3f9612ed1bd244b07dfd6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:40:52 compute-0 podman[269443]: 2025-09-30 14:40:52.899203461 +0000 UTC m=+0.124514649 container start d88f0cc72f487145fcaf99f1acc03b1aff53e72e3f3f9612ed1bd244b07dfd6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:40:52 compute-0 podman[269443]: 2025-09-30 14:40:52.809323079 +0000 UTC m=+0.034634367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:40:52 compute-0 bash[269443]: d88f0cc72f487145fcaf99f1acc03b1aff53e72e3f3f9612ed1bd244b07dfd6e
Sep 30 14:40:52 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.qrbicy for 5e3c7776-ac03-5698-b79f-a6dc2d80cae6.
Sep 30 14:40:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:52 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Sep 30 14:40:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:52 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Sep 30 14:40:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:52.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:52 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Sep 30 14:40:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:52 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Sep 30 14:40:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:52 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Sep 30 14:40:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:52 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Sep 30 14:40:52 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:52 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Sep 30 14:40:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:40:53 compute-0 podman[269502]: 2025-09-30 14:40:53.14865698 +0000 UTC m=+0.074104252 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:40:53 compute-0 podman[269510]: 2025-09-30 14:40:53.158346459 +0000 UTC m=+0.067639079 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Sep 30 14:40:53 compute-0 podman[269504]: 2025-09-30 14:40:53.201349439 +0000 UTC m=+0.108290366 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:40:53 compute-0 podman[269503]: 2025-09-30 14:40:53.226664776 +0000 UTC m=+0.139161681 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Sep 30 14:40:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:53.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:53.623Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:40:54 compute-0 sshd-session[269395]: Failed password for root from 91.224.92.28 port 28284 ssh2
Sep 30 14:40:54 compute-0 ceph-mon[74194]: pgmap v761: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 16 KiB/s wr, 1 op/s
Sep 30 14:40:54 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/998134467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:40:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v762: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Sep 30 14:40:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:54] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Sep 30 14:40:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:40:54] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Sep 30 14:40:54 compute-0 unix_chkpwd[269584]: password check failed for user (root)
Sep 30 14:40:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:54.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:55 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:40:55.005 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:40:55 compute-0 nova_compute[261524]: 2025-09-30 14:40:55.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:55.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:56 compute-0 nova_compute[261524]: 2025-09-30 14:40:56.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:40:56 compute-0 ceph-mon[74194]: pgmap v762: 337 pgs: 337 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Sep 30 14:40:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v763: 337 pgs: 337 active+clean; 145 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 751 KiB/s wr, 18 op/s
Sep 30 14:40:56 compute-0 sshd-session[269395]: Failed password for root from 91.224.92.28 port 28284 ssh2
Sep 30 14:40:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:56.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:57 compute-0 unix_chkpwd[269587]: password check failed for user (root)
Sep 30 14:40:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:40:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:57.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:40:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:57.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:40:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:40:57.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:40:57 compute-0 ceph-mon[74194]: pgmap v763: 337 pgs: 337 active+clean; 145 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 751 KiB/s wr, 18 op/s
Sep 30 14:40:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:40:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:57.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:40:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v764: 337 pgs: 337 active+clean; 145 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 747 KiB/s wr, 17 op/s
Sep 30 14:40:58 compute-0 sshd-session[269395]: Failed password for root from 91.224.92.28 port 28284 ssh2
Sep 30 14:40:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:40:58.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:59 compute-0 sshd-session[269395]: Received disconnect from 91.224.92.28 port 28284:11:  [preauth]
Sep 30 14:40:59 compute-0 sshd-session[269395]: Disconnected from authenticating user root 91.224.92.28 port 28284 [preauth]
Sep 30 14:40:59 compute-0 sshd-session[269395]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.28  user=root
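[annotation] The sshd-session, unix_chkpwd and pam_unix lines between 14:40:52 and 14:40:59 are a short password-guessing burst against root from 91.224.92.28 (three failures, then disconnect at preauth). A small sketch for tallying such attempts per source IP straight from a saved copy of this log; the file name and regex are illustrative only:

    # Hedged sketch: count "Failed password" attempts per source IP from a saved
    # copy of this journal (file name is hypothetical).
    import re
    from collections import Counter

    pattern = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+) port \d+")
    counts = Counter()
    with open("compute-0-messages.log") as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                counts[m.group(1)] += 1

    for ip, n in counts.most_common():
        print(ip, n)    # e.g. 91.224.92.28 3 for the burst above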
Sep 30 14:40:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:40:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:40:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:40:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:40:59
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.control', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes']
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:40:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:40:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:40:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:40:59.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:40:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:40:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:40:59 compute-0 ceph-mon[74194]: pgmap v764: 337 pgs: 337 active+clean; 145 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 747 KiB/s wr, 17 op/s
Sep 30 14:40:59 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1928509374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:40:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009196488165006187 of space, bias 1.0, pg target 0.2758946449501856 quantized to 32 (current 32)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:40:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:40:59 compute-0 unix_chkpwd[269594]: password check failed for user (root)
Sep 30 14:40:59 compute-0 sshd-session[269590]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.28  user=root
Sep 30 14:41:00 compute-0 nova_compute[261524]: 2025-09-30 14:41:00.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v765: 337 pgs: 337 active+clean; 145 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 747 KiB/s wr, 17 op/s
Sep 30 14:41:00 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1310231084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:41:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:41:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:41:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:41:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:41:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:41:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:00.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:41:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:41:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:41:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:41:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:41:01 compute-0 nova_compute[261524]: 2025-09-30 14:41:01.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:01 compute-0 sshd-session[269590]: Failed password for root from 91.224.92.28 port 28292 ssh2
Sep 30 14:41:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:01 compute-0 ceph-mon[74194]: pgmap v765: 337 pgs: 337 active+clean; 145 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 747 KiB/s wr, 17 op/s
Sep 30 14:41:02 compute-0 unix_chkpwd[269597]: password check failed for user (root)
Sep 30 14:41:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v766: 337 pgs: 337 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Sep 30 14:41:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:02.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:41:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:03.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:41:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:03.624Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:41:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:03.624Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:41:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:03.625Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:41:03 compute-0 sudo[269599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:41:03 compute-0 sudo[269599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:03 compute-0 sudo[269599]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:03 compute-0 ceph-mon[74194]: pgmap v766: 337 pgs: 337 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Sep 30 14:41:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:04 compute-0 sshd-session[269590]: Failed password for root from 91.224.92.28 port 28292 ssh2
Sep 30 14:41:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v767: 337 pgs: 337 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Sep 30 14:41:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:04] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Sep 30 14:41:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:04] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Sep 30 14:41:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:04.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:05 compute-0 nova_compute[261524]: 2025-09-30 14:41:05.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:05.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:05 compute-0 ceph-mon[74194]: pgmap v767: 337 pgs: 337 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Sep 30 14:41:06 compute-0 unix_chkpwd[269627]: password check failed for user (root)
Sep 30 14:41:06 compute-0 nova_compute[261524]: 2025-09-30 14:41:06.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:06 compute-0 sudo[269628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:41:06 compute-0 sudo[269628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:06 compute-0 sudo[269628]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:06 compute-0 sudo[269653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:41:06 compute-0 sudo[269653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v768: 337 pgs: 337 active+clean; 167 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Sep 30 14:41:06 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/21886635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:06.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:06 compute-0 podman[269750]: 2025-09-30 14:41:06.997533837 +0000 UTC m=+0.073510496 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Sep 30 14:41:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:07 compute-0 podman[269750]: 2025-09-30 14:41:07.093585955 +0000 UTC m=+0.169562634 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 14:41:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:07.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:07.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:07 compute-0 podman[269869]: 2025-09-30 14:41:07.682912 +0000 UTC m=+0.068424800 container exec 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:41:07 compute-0 podman[269869]: 2025-09-30 14:41:07.693654937 +0000 UTC m=+0.079167687 container exec_died 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:41:07 compute-0 sshd-session[269590]: Failed password for root from 91.224.92.28 port 28292 ssh2
Sep 30 14:41:07 compute-0 ceph-mon[74194]: pgmap v768: 337 pgs: 337 active+clean; 167 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Sep 30 14:41:07 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3635476583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:08 compute-0 podman[269963]: 2025-09-30 14:41:08.028979382 +0000 UTC m=+0.055056973 container exec d88f0cc72f487145fcaf99f1acc03b1aff53e72e3f3f9612ed1bd244b07dfd6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:41:08 compute-0 podman[269963]: 2025-09-30 14:41:08.050628701 +0000 UTC m=+0.076706282 container exec_died d88f0cc72f487145fcaf99f1acc03b1aff53e72e3f3f9612ed1bd244b07dfd6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:41:08 compute-0 sshd-session[269590]: Received disconnect from 91.224.92.28 port 28292:11:  [preauth]
Sep 30 14:41:08 compute-0 sshd-session[269590]: Disconnected from authenticating user root 91.224.92.28 port 28292 [preauth]
Sep 30 14:41:08 compute-0 sshd-session[269590]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.28  user=root
Sep 30 14:41:08 compute-0 podman[270029]: 2025-09-30 14:41:08.270465768 +0000 UTC m=+0.053971094 container exec ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:41:08 compute-0 podman[270029]: 2025-09-30 14:41:08.281528394 +0000 UTC m=+0.065033700 container exec_died ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:41:08 compute-0 podman[270097]: 2025-09-30 14:41:08.499921422 +0000 UTC m=+0.052793612 container exec df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, com.redhat.component=keepalived-container, distribution-scope=public, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, version=2.2.4, vcs-type=git, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Sep 30 14:41:08 compute-0 podman[270097]: 2025-09-30 14:41:08.507600847 +0000 UTC m=+0.060473057 container exec_died df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, architecture=x86_64, description=keepalived for Ceph, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, build-date=2023-02-22T09:23:20, release=1793, vcs-type=git)
Sep 30 14:41:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v769: 337 pgs: 337 active+clean; 167 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 92 op/s
Sep 30 14:41:08 compute-0 podman[270163]: 2025-09-30 14:41:08.760058377 +0000 UTC m=+0.070032683 container exec b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:41:08 compute-0 podman[270163]: 2025-09-30 14:41:08.793712886 +0000 UTC m=+0.103687142 container exec_died b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:41:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000053s ======
Sep 30 14:41:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:08.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Sep 30 14:41:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:09 compute-0 podman[270240]: 2025-09-30 14:41:09.022353908 +0000 UTC m=+0.053942172 container exec 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:41:09 compute-0 unix_chkpwd[270268]: password check failed for user (root)
Sep 30 14:41:09 compute-0 sshd-session[270065]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.28  user=root
Sep 30 14:41:09 compute-0 podman[270240]: 2025-09-30 14:41:09.260720241 +0000 UTC m=+0.292308485 container exec_died 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:41:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:09.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:09 compute-0 podman[270353]: 2025-09-30 14:41:09.73388224 +0000 UTC m=+0.058240368 container exec e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:41:09 compute-0 podman[270353]: 2025-09-30 14:41:09.786494516 +0000 UTC m=+0.110852635 container exec_died e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:41:09 compute-0 sudo[269653]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:41:09 compute-0 ceph-mon[74194]: pgmap v769: 337 pgs: 337 active+clean; 167 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 92 op/s
Sep 30 14:41:09 compute-0 nova_compute[261524]: 2025-09-30 14:41:09.948 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:41:09 compute-0 nova_compute[261524]: 2025-09-30 14:41:09.951 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:41:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:41:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:10 compute-0 nova_compute[261524]: 2025-09-30 14:41:10.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:10 compute-0 sudo[270395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:41:10 compute-0 sudo[270395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:10 compute-0 sudo[270395]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:10 compute-0 sudo[270420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:41:10 compute-0 sudo[270420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:10 compute-0 nova_compute[261524]: 2025-09-30 14:41:10.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v770: 337 pgs: 337 active+clean; 167 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 92 op/s
Sep 30 14:41:10 compute-0 sudo[270420]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:10 compute-0 nova_compute[261524]: 2025-09-30 14:41:10.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:41:10 compute-0 nova_compute[261524]: 2025-09-30 14:41:10.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:41:10 compute-0 nova_compute[261524]: 2025-09-30 14:41:10.954 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:41:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:41:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:41:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:41:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:41:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:41:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:10.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:41:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:41:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:41:11 compute-0 sshd-session[270065]: Failed password for root from 91.224.92.28 port 27606 ssh2
Sep 30 14:41:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:41:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:41:11 compute-0 nova_compute[261524]: 2025-09-30 14:41:11.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:41:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:41:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:41:11 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:41:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:41:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:41:11 compute-0 sudo[270478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:41:11 compute-0 sudo[270478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:11 compute-0 sudo[270478]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:11 compute-0 sudo[270503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:41:11 compute-0 sudo[270503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:41:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:41:11 compute-0 podman[270571]: 2025-09-30 14:41:11.849839908 +0000 UTC m=+0.049387961 container create 22fbe42253d3969055bf3e74bf453c560bc85383bb5d78cd7e58b8f43c0b3d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 14:41:11 compute-0 systemd[1]: Started libpod-conmon-22fbe42253d3969055bf3e74bf453c560bc85383bb5d78cd7e58b8f43c0b3d6b.scope.
Sep 30 14:41:11 compute-0 podman[270571]: 2025-09-30 14:41:11.825799296 +0000 UTC m=+0.025347429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:41:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:41:11 compute-0 podman[270571]: 2025-09-30 14:41:11.953363056 +0000 UTC m=+0.152911139 container init 22fbe42253d3969055bf3e74bf453c560bc85383bb5d78cd7e58b8f43c0b3d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hopper, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:41:11 compute-0 nova_compute[261524]: 2025-09-30 14:41:11.957 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:41:11 compute-0 nova_compute[261524]: 2025-09-30 14:41:11.959 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquired lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:41:11 compute-0 nova_compute[261524]: 2025-09-30 14:41:11.960 2 DEBUG nova.network.neutron [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Sep 30 14:41:11 compute-0 nova_compute[261524]: 2025-09-30 14:41:11.960 2 DEBUG nova.objects.instance [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4a2e4963-f354-48e2-af39-ce9e01d9eda1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:41:11 compute-0 podman[270571]: 2025-09-30 14:41:11.962972543 +0000 UTC m=+0.162520636 container start 22fbe42253d3969055bf3e74bf453c560bc85383bb5d78cd7e58b8f43c0b3d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:41:11 compute-0 podman[270571]: 2025-09-30 14:41:11.968213043 +0000 UTC m=+0.167761116 container attach 22fbe42253d3969055bf3e74bf453c560bc85383bb5d78cd7e58b8f43c0b3d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 14:41:11 compute-0 systemd[1]: libpod-22fbe42253d3969055bf3e74bf453c560bc85383bb5d78cd7e58b8f43c0b3d6b.scope: Deactivated successfully.
Sep 30 14:41:11 compute-0 wonderful_hopper[270588]: 167 167
Sep 30 14:41:11 compute-0 conmon[270588]: conmon 22fbe42253d3969055bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22fbe42253d3969055bf3e74bf453c560bc85383bb5d78cd7e58b8f43c0b3d6b.scope/container/memory.events
Sep 30 14:41:11 compute-0 podman[270571]: 2025-09-30 14:41:11.971548162 +0000 UTC m=+0.171096285 container died 22fbe42253d3969055bf3e74bf453c560bc85383bb5d78cd7e58b8f43c0b3d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:41:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-77ee0eb705fe252aa9d21f3f800ba36a8f41196bf870db6dc0c4018cb7ddef87-merged.mount: Deactivated successfully.
Sep 30 14:41:12 compute-0 podman[270571]: 2025-09-30 14:41:12.020323006 +0000 UTC m=+0.219871099 container remove 22fbe42253d3969055bf3e74bf453c560bc85383bb5d78cd7e58b8f43c0b3d6b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:41:12 compute-0 systemd[1]: libpod-conmon-22fbe42253d3969055bf3e74bf453c560bc85383bb5d78cd7e58b8f43c0b3d6b.scope: Deactivated successfully.
Sep 30 14:41:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:12 compute-0 ceph-mon[74194]: pgmap v770: 337 pgs: 337 active+clean; 167 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 92 op/s
Sep 30 14:41:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3701260167' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:41:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3701260167' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:41:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:41:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:41:12 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:41:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4126971093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:12 compute-0 podman[270613]: 2025-09-30 14:41:12.212574526 +0000 UTC m=+0.049656939 container create 93e96975332f6f3da8ed9b879770ed8be10d20cd16b13607f4434898ffa2b015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_davinci, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:41:12 compute-0 systemd[1]: Started libpod-conmon-93e96975332f6f3da8ed9b879770ed8be10d20cd16b13607f4434898ffa2b015.scope.
Sep 30 14:41:12 compute-0 podman[270613]: 2025-09-30 14:41:12.191955594 +0000 UTC m=+0.029038037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:41:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c0051efb4ebdac501fa02f6e4717bed1f9f81c6b08ca3336fbb1192e50556b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c0051efb4ebdac501fa02f6e4717bed1f9f81c6b08ca3336fbb1192e50556b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c0051efb4ebdac501fa02f6e4717bed1f9f81c6b08ca3336fbb1192e50556b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c0051efb4ebdac501fa02f6e4717bed1f9f81c6b08ca3336fbb1192e50556b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c0051efb4ebdac501fa02f6e4717bed1f9f81c6b08ca3336fbb1192e50556b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:12 compute-0 podman[270613]: 2025-09-30 14:41:12.320428119 +0000 UTC m=+0.157510592 container init 93e96975332f6f3da8ed9b879770ed8be10d20cd16b13607f4434898ffa2b015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_davinci, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:41:12 compute-0 podman[270613]: 2025-09-30 14:41:12.332226055 +0000 UTC m=+0.169308478 container start 93e96975332f6f3da8ed9b879770ed8be10d20cd16b13607f4434898ffa2b015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_davinci, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:41:12 compute-0 podman[270613]: 2025-09-30 14:41:12.33543679 +0000 UTC m=+0.172519213 container attach 93e96975332f6f3da8ed9b879770ed8be10d20cd16b13607f4434898ffa2b015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_davinci, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 14:41:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v771: 337 pgs: 337 active+clean; 167 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 93 op/s
Sep 30 14:41:12 compute-0 zealous_davinci[270629]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:41:12 compute-0 zealous_davinci[270629]: --> All data devices are unavailable
Sep 30 14:41:12 compute-0 systemd[1]: libpod-93e96975332f6f3da8ed9b879770ed8be10d20cd16b13607f4434898ffa2b015.scope: Deactivated successfully.
Sep 30 14:41:12 compute-0 podman[270613]: 2025-09-30 14:41:12.73639228 +0000 UTC m=+0.573474733 container died 93e96975332f6f3da8ed9b879770ed8be10d20cd16b13607f4434898ffa2b015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Sep 30 14:41:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-90c0051efb4ebdac501fa02f6e4717bed1f9f81c6b08ca3336fbb1192e50556b-merged.mount: Deactivated successfully.
Sep 30 14:41:12 compute-0 podman[270613]: 2025-09-30 14:41:12.793524667 +0000 UTC m=+0.630607120 container remove 93e96975332f6f3da8ed9b879770ed8be10d20cd16b13607f4434898ffa2b015 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_davinci, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:41:12 compute-0 systemd[1]: libpod-conmon-93e96975332f6f3da8ed9b879770ed8be10d20cd16b13607f4434898ffa2b015.scope: Deactivated successfully.
Sep 30 14:41:12 compute-0 sudo[270503]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:12 compute-0 sudo[270658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:41:12 compute-0 sudo[270658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:12 compute-0 sudo[270658]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:12.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:13 compute-0 sudo[270683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:41:13 compute-0 sudo[270683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:13 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/346645917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:13 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4270157773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:13 compute-0 unix_chkpwd[270708]: password check failed for user (root)
Sep 30 14:41:13 compute-0 podman[270752]: 2025-09-30 14:41:13.491153497 +0000 UTC m=+0.051809637 container create 35ecf98d94c17a5321578869c59f59fc3c694b01a493016aeb9391f7964510e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_kalam, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:41:13 compute-0 systemd[1]: Started libpod-conmon-35ecf98d94c17a5321578869c59f59fc3c694b01a493016aeb9391f7964510e3.scope.
Sep 30 14:41:13 compute-0 podman[270752]: 2025-09-30 14:41:13.468976634 +0000 UTC m=+0.029632804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:41:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:41:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:13.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:13 compute-0 podman[270752]: 2025-09-30 14:41:13.592389353 +0000 UTC m=+0.153045553 container init 35ecf98d94c17a5321578869c59f59fc3c694b01a493016aeb9391f7964510e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_kalam, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Sep 30 14:41:13 compute-0 podman[270752]: 2025-09-30 14:41:13.600051808 +0000 UTC m=+0.160707968 container start 35ecf98d94c17a5321578869c59f59fc3c694b01a493016aeb9391f7964510e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_kalam, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 14:41:13 compute-0 podman[270752]: 2025-09-30 14:41:13.603725626 +0000 UTC m=+0.164381786 container attach 35ecf98d94c17a5321578869c59f59fc3c694b01a493016aeb9391f7964510e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_kalam, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:41:13 compute-0 crazy_kalam[270768]: 167 167
Sep 30 14:41:13 compute-0 systemd[1]: libpod-35ecf98d94c17a5321578869c59f59fc3c694b01a493016aeb9391f7964510e3.scope: Deactivated successfully.
Sep 30 14:41:13 compute-0 conmon[270768]: conmon 35ecf98d94c17a532157 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35ecf98d94c17a5321578869c59f59fc3c694b01a493016aeb9391f7964510e3.scope/container/memory.events
Sep 30 14:41:13 compute-0 podman[270752]: 2025-09-30 14:41:13.61023264 +0000 UTC m=+0.170888800 container died 35ecf98d94c17a5321578869c59f59fc3c694b01a493016aeb9391f7964510e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:41:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:13.626Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec4fb4b444e001785c258e96692b7e229445deeb9551312aed6e2590148b96d0-merged.mount: Deactivated successfully.
Sep 30 14:41:13 compute-0 podman[270752]: 2025-09-30 14:41:13.652098329 +0000 UTC m=+0.212754489 container remove 35ecf98d94c17a5321578869c59f59fc3c694b01a493016aeb9391f7964510e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:41:13 compute-0 systemd[1]: libpod-conmon-35ecf98d94c17a5321578869c59f59fc3c694b01a493016aeb9391f7964510e3.scope: Deactivated successfully.
Sep 30 14:41:13 compute-0 podman[270794]: 2025-09-30 14:41:13.86196257 +0000 UTC m=+0.048111097 container create ce74d85da85c76c65826b8e4103490195011a35d3a3f442ca80d03f02cb98ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:41:13 compute-0 systemd[1]: Started libpod-conmon-ce74d85da85c76c65826b8e4103490195011a35d3a3f442ca80d03f02cb98ef5.scope.
Sep 30 14:41:13 compute-0 podman[270794]: 2025-09-30 14:41:13.842974472 +0000 UTC m=+0.029123029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:41:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637a018ef29d7e47a50b42d5c37da586c31d00deabd1d7b87bded15751a5866f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637a018ef29d7e47a50b42d5c37da586c31d00deabd1d7b87bded15751a5866f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637a018ef29d7e47a50b42d5c37da586c31d00deabd1d7b87bded15751a5866f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637a018ef29d7e47a50b42d5c37da586c31d00deabd1d7b87bded15751a5866f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:13 compute-0 podman[270794]: 2025-09-30 14:41:13.961336876 +0000 UTC m=+0.147485413 container init ce74d85da85c76c65826b8e4103490195011a35d3a3f442ca80d03f02cb98ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:41:13 compute-0 podman[270794]: 2025-09-30 14:41:13.969638248 +0000 UTC m=+0.155786775 container start ce74d85da85c76c65826b8e4103490195011a35d3a3f442ca80d03f02cb98ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:41:13 compute-0 podman[270794]: 2025-09-30 14:41:13.972485064 +0000 UTC m=+0.158633611 container attach ce74d85da85c76c65826b8e4103490195011a35d3a3f442ca80d03f02cb98ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:41:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:14 compute-0 ceph-mon[74194]: pgmap v771: 337 pgs: 337 active+clean; 167 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 93 op/s
Sep 30 14:41:14 compute-0 elegant_shtern[270812]: {
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:     "0": [
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:         {
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "devices": [
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "/dev/loop3"
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             ],
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "lv_name": "ceph_lv0",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "lv_size": "21470642176",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "name": "ceph_lv0",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "tags": {
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.cluster_name": "ceph",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.crush_device_class": "",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.encrypted": "0",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.osd_id": "0",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.type": "block",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.vdo": "0",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:                 "ceph.with_tpm": "0"
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             },
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "type": "block",
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:             "vg_name": "ceph_vg0"
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:         }
Sep 30 14:41:14 compute-0 elegant_shtern[270812]:     ]
Sep 30 14:41:14 compute-0 elegant_shtern[270812]: }
Sep 30 14:41:14 compute-0 systemd[1]: libpod-ce74d85da85c76c65826b8e4103490195011a35d3a3f442ca80d03f02cb98ef5.scope: Deactivated successfully.
Sep 30 14:41:14 compute-0 podman[270794]: 2025-09-30 14:41:14.290729062 +0000 UTC m=+0.476877619 container died ce74d85da85c76c65826b8e4103490195011a35d3a3f442ca80d03f02cb98ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-637a018ef29d7e47a50b42d5c37da586c31d00deabd1d7b87bded15751a5866f-merged.mount: Deactivated successfully.
Sep 30 14:41:14 compute-0 podman[270794]: 2025-09-30 14:41:14.330221508 +0000 UTC m=+0.516370035 container remove ce74d85da85c76c65826b8e4103490195011a35d3a3f442ca80d03f02cb98ef5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_shtern, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:41:14 compute-0 systemd[1]: libpod-conmon-ce74d85da85c76c65826b8e4103490195011a35d3a3f442ca80d03f02cb98ef5.scope: Deactivated successfully.
Sep 30 14:41:14 compute-0 sudo[270683]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:14 compute-0 sudo[270832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:41:14 compute-0 sudo[270832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:14 compute-0 sudo[270832]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:14 compute-0 sudo[270857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:41:14 compute-0 sudo[270857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v772: 337 pgs: 337 active+clean; 167 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Sep 30 14:41:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:41:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:41:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:14] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Sep 30 14:41:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:14] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Sep 30 14:41:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:14.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:14 compute-0 nova_compute[261524]: 2025-09-30 14:41:14.976 2 DEBUG nova.network.neutron [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Updating instance_info_cache with network_info: [{"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.037 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Releasing lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.038 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.038 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.039 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.039 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.039 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.039 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.040 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.040 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:41:15 compute-0 podman[270924]: 2025-09-30 14:41:15.05294278 +0000 UTC m=+0.066286833 container create dfec6b2f7a68b3f67c069129288691a41d394130705e603e54a24eef20079d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.065 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.066 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.066 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.066 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.066 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:41:15 compute-0 systemd[1]: Started libpod-conmon-dfec6b2f7a68b3f67c069129288691a41d394130705e603e54a24eef20079d64.scope.
Sep 30 14:41:15 compute-0 podman[270924]: 2025-09-30 14:41:15.017558954 +0000 UTC m=+0.030903077 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:41:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:41:15 compute-0 podman[270924]: 2025-09-30 14:41:15.158542493 +0000 UTC m=+0.171886546 container init dfec6b2f7a68b3f67c069129288691a41d394130705e603e54a24eef20079d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:41:15 compute-0 podman[270924]: 2025-09-30 14:41:15.169033283 +0000 UTC m=+0.182377336 container start dfec6b2f7a68b3f67c069129288691a41d394130705e603e54a24eef20079d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:41:15 compute-0 podman[270924]: 2025-09-30 14:41:15.172760343 +0000 UTC m=+0.186104396 container attach dfec6b2f7a68b3f67c069129288691a41d394130705e603e54a24eef20079d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:41:15 compute-0 festive_allen[270941]: 167 167
Sep 30 14:41:15 compute-0 systemd[1]: libpod-dfec6b2f7a68b3f67c069129288691a41d394130705e603e54a24eef20079d64.scope: Deactivated successfully.
Sep 30 14:41:15 compute-0 conmon[270941]: conmon dfec6b2f7a68b3f67c06 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dfec6b2f7a68b3f67c069129288691a41d394130705e603e54a24eef20079d64.scope/container/memory.events
Sep 30 14:41:15 compute-0 podman[270924]: 2025-09-30 14:41:15.178759023 +0000 UTC m=+0.192103076 container died dfec6b2f7a68b3f67c069129288691a41d394130705e603e54a24eef20079d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:41:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-926f7d3e192c995e70f1d8b313d8a23128085833f81632b91d0bd6a2cbc318e8-merged.mount: Deactivated successfully.
Sep 30 14:41:15 compute-0 podman[270924]: 2025-09-30 14:41:15.229035807 +0000 UTC m=+0.242379870 container remove dfec6b2f7a68b3f67c069129288691a41d394130705e603e54a24eef20079d64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:15 compute-0 systemd[1]: libpod-conmon-dfec6b2f7a68b3f67c069129288691a41d394130705e603e54a24eef20079d64.scope: Deactivated successfully.
Sep 30 14:41:15 compute-0 podman[270985]: 2025-09-30 14:41:15.44646335 +0000 UTC m=+0.051574140 container create 367319ce5232aac90935b1f23cc264c3b81332ae0c8e6e7887f0cd57044471ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_moser, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:41:15 compute-0 systemd[1]: Started libpod-conmon-367319ce5232aac90935b1f23cc264c3b81332ae0c8e6e7887f0cd57044471ca.scope.
Sep 30 14:41:15 compute-0 podman[270985]: 2025-09-30 14:41:15.424995816 +0000 UTC m=+0.030106586 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:41:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:41:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:41:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:41:15 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644108541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13bf48819afd5a77d3e7a707b2b94468df4791498396812489a5c1f64086476b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13bf48819afd5a77d3e7a707b2b94468df4791498396812489a5c1f64086476b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13bf48819afd5a77d3e7a707b2b94468df4791498396812489a5c1f64086476b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13bf48819afd5a77d3e7a707b2b94468df4791498396812489a5c1f64086476b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:41:15 compute-0 podman[270985]: 2025-09-30 14:41:15.572348646 +0000 UTC m=+0.177459436 container init 367319ce5232aac90935b1f23cc264c3b81332ae0c8e6e7887f0cd57044471ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.573 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:41:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:15.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:15 compute-0 podman[270985]: 2025-09-30 14:41:15.579902308 +0000 UTC m=+0.185013078 container start 367319ce5232aac90935b1f23cc264c3b81332ae0c8e6e7887f0cd57044471ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 14:41:15 compute-0 podman[270985]: 2025-09-30 14:41:15.584287565 +0000 UTC m=+0.189398415 container attach 367319ce5232aac90935b1f23cc264c3b81332ae0c8e6e7887f0cd57044471ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_moser, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.656 2 DEBUG nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.656 2 DEBUG nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Sep 30 14:41:15 compute-0 sshd-session[270065]: Failed password for root from 91.224.92.28 port 27606 ssh2
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.852 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.853 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4370MB free_disk=59.92181396484375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.853 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.853 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.943 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Instance 4a2e4963-f354-48e2-af39-ce9e01d9eda1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.944 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.944 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:41:15 compute-0 nova_compute[261524]: 2025-09-30 14:41:15.978 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:41:16 compute-0 nova_compute[261524]: 2025-09-30 14:41:16.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:16 compute-0 lvm[271100]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:41:16 compute-0 lvm[271100]: VG ceph_vg0 finished
Sep 30 14:41:16 compute-0 lvm[271104]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:41:16 compute-0 lvm[271104]: VG ceph_vg0 finished
Sep 30 14:41:16 compute-0 nice_moser[271002]: {}
Sep 30 14:41:16 compute-0 systemd[1]: libpod-367319ce5232aac90935b1f23cc264c3b81332ae0c8e6e7887f0cd57044471ca.scope: Deactivated successfully.
Sep 30 14:41:16 compute-0 systemd[1]: libpod-367319ce5232aac90935b1f23cc264c3b81332ae0c8e6e7887f0cd57044471ca.scope: Consumed 1.246s CPU time.
Sep 30 14:41:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:41:16 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218881717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:16 compute-0 podman[271105]: 2025-09-30 14:41:16.441107031 +0000 UTC m=+0.029244173 container died 367319ce5232aac90935b1f23cc264c3b81332ae0c8e6e7887f0cd57044471ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:41:16 compute-0 nova_compute[261524]: 2025-09-30 14:41:16.454 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:41:16 compute-0 nova_compute[261524]: 2025-09-30 14:41:16.458 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-13bf48819afd5a77d3e7a707b2b94468df4791498396812489a5c1f64086476b-merged.mount: Deactivated successfully.
Sep 30 14:41:16 compute-0 nova_compute[261524]: 2025-09-30 14:41:16.480 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:41:16 compute-0 podman[271105]: 2025-09-30 14:41:16.486584827 +0000 UTC m=+0.074721939 container remove 367319ce5232aac90935b1f23cc264c3b81332ae0c8e6e7887f0cd57044471ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_moser, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:41:16 compute-0 systemd[1]: libpod-conmon-367319ce5232aac90935b1f23cc264c3b81332ae0c8e6e7887f0cd57044471ca.scope: Deactivated successfully.
Sep 30 14:41:16 compute-0 nova_compute[261524]: 2025-09-30 14:41:16.514 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:41:16 compute-0 nova_compute[261524]: 2025-09-30 14:41:16.515 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:16 compute-0 ceph-mon[74194]: pgmap v772: 337 pgs: 337 active+clean; 167 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Sep 30 14:41:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1644108541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3218881717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:16 compute-0 sudo[270857]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:41:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:41:16 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v773: 337 pgs: 337 active+clean; 188 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Sep 30 14:41:16 compute-0 sudo[271121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:41:16 compute-0 sudo[271121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:16 compute-0 sudo[271121]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:16.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:17.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:17 compute-0 unix_chkpwd[271146]: password check failed for user (root)
Sep 30 14:41:17 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:17 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:41:17 compute-0 ceph-mon[74194]: pgmap v773: 337 pgs: 337 active+clean; 188 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Sep 30 14:41:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:17.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v774: 337 pgs: 337 active+clean; 188 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 275 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Sep 30 14:41:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:41:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:18.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:41:18 compute-0 sshd-session[270065]: Failed password for root from 91.224.92.28 port 27606 ssh2
Sep 30 14:41:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:19 compute-0 sshd-session[270065]: Received disconnect from 91.224.92.28 port 27606:11:  [preauth]
Sep 30 14:41:19 compute-0 sshd-session[270065]: Disconnected from authenticating user root 91.224.92.28 port 27606 [preauth]
Sep 30 14:41:19 compute-0 sshd-session[270065]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.28  user=root
Sep 30 14:41:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:41:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:19.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:41:19 compute-0 ceph-mon[74194]: pgmap v774: 337 pgs: 337 active+clean; 188 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 275 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Sep 30 14:41:20 compute-0 nova_compute[261524]: 2025-09-30 14:41:20.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v775: 337 pgs: 337 active+clean; 188 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 275 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Sep 30 14:41:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:20.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:21 compute-0 nova_compute[261524]: 2025-09-30 14:41:21.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:21.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:21 compute-0 ceph-mon[74194]: pgmap v775: 337 pgs: 337 active+clean; 188 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 275 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Sep 30 14:41:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v776: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:41:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:22.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:23.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:23.629Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:23 compute-0 ceph-mon[74194]: pgmap v776: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:41:23 compute-0 sudo[271154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:41:23 compute-0 sudo[271154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:23 compute-0 sudo[271154]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:23 compute-0 podman[271178]: 2025-09-30 14:41:23.916508628 +0000 UTC m=+0.085507126 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:41:23 compute-0 podman[271180]: 2025-09-30 14:41:23.928131149 +0000 UTC m=+0.096449249 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 14:41:23 compute-0 podman[271182]: 2025-09-30 14:41:23.932885336 +0000 UTC m=+0.086997077 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Sep 30 14:41:23 compute-0 podman[271181]: 2025-09-30 14:41:23.935927457 +0000 UTC m=+0.106327172 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 14:41:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v777: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:41:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:24] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Sep 30 14:41:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:24] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Sep 30 14:41:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:41:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:24.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:41:25 compute-0 nova_compute[261524]: 2025-09-30 14:41:25.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:25.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:25 compute-0 ceph-mon[74194]: pgmap v777: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:41:26 compute-0 nova_compute[261524]: 2025-09-30 14:41:26.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v778: 337 pgs: 337 active+clean; 121 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Sep 30 14:41:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:41:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:26.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:41:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:27.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:27.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:28 compute-0 ceph-mon[74194]: pgmap v778: 337 pgs: 337 active+clean; 121 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Sep 30 14:41:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2285275552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v779: 337 pgs: 337 active+clean; 121 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 118 KiB/s wr, 43 op/s
Sep 30 14:41:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:41:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:28.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:41:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:29.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:41:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:41:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:41:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:41:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:41:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:41:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:41:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:41:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [WARNING] 272/144129 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Sep 30 14:41:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei[97239]: [ALERT] 272/144129 (4) : backend 'backend' has no server available!
Sep 30 14:41:30 compute-0 ovn_controller[154021]: 2025-09-30T14:41:30Z|00034|binding|INFO|Releasing lport a812d02b-29c8-4471-9c6d-10114d9a1a29 from this chassis (sb_readonly=0)
Sep 30 14:41:30 compute-0 nova_compute[261524]: 2025-09-30 14:41:30.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:30 compute-0 ceph-mon[74194]: pgmap v779: 337 pgs: 337 active+clean; 121 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 118 KiB/s wr, 43 op/s
Sep 30 14:41:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:41:30 compute-0 nova_compute[261524]: 2025-09-30 14:41:30.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v780: 337 pgs: 337 active+clean; 121 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 118 KiB/s wr, 43 op/s
Sep 30 14:41:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:30.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.193 2 DEBUG nova.compute.manager [req-2f41b436-ecd2-40c4-807f-094d4de24c6f req-08cd720b-4bc8-4138-af36-2d23a32b6b2a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received event network-changed-282b94c3-1056-44d8-9ca4-959a3718bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.194 2 DEBUG nova.compute.manager [req-2f41b436-ecd2-40c4-807f-094d4de24c6f req-08cd720b-4bc8-4138-af36-2d23a32b6b2a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Refreshing instance network info cache due to event network-changed-282b94c3-1056-44d8-9ca4-959a3718bd94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.194 2 DEBUG oslo_concurrency.lockutils [req-2f41b436-ecd2-40c4-807f-094d4de24c6f req-08cd720b-4bc8-4138-af36-2d23a32b6b2a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.195 2 DEBUG oslo_concurrency.lockutils [req-2f41b436-ecd2-40c4-807f-094d4de24c6f req-08cd720b-4bc8-4138-af36-2d23a32b6b2a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.195 2 DEBUG nova.network.neutron [req-2f41b436-ecd2-40c4-807f-094d4de24c6f req-08cd720b-4bc8-4138-af36-2d23a32b6b2a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Refreshing network info cache for port 282b94c3-1056-44d8-9ca4-959a3718bd94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:31 compute-0 ceph-mon[74194]: pgmap v780: 337 pgs: 337 active+clean; 121 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 118 KiB/s wr, 43 op/s
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.328 2 DEBUG oslo_concurrency.lockutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.328 2 DEBUG oslo_concurrency.lockutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.329 2 DEBUG oslo_concurrency.lockutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.329 2 DEBUG oslo_concurrency.lockutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.329 2 DEBUG oslo_concurrency.lockutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.330 2 INFO nova.compute.manager [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Terminating instance
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.331 2 DEBUG nova.compute.manager [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Sep 30 14:41:31 compute-0 kernel: tap282b94c3-10 (unregistering): left promiscuous mode
Sep 30 14:41:31 compute-0 NetworkManager[45472]: <info>  [1759243291.4025] device (tap282b94c3-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:31 compute-0 ovn_controller[154021]: 2025-09-30T14:41:31Z|00035|binding|INFO|Releasing lport 282b94c3-1056-44d8-9ca4-959a3718bd94 from this chassis (sb_readonly=0)
Sep 30 14:41:31 compute-0 ovn_controller[154021]: 2025-09-30T14:41:31Z|00036|binding|INFO|Setting lport 282b94c3-1056-44d8-9ca4-959a3718bd94 down in Southbound
Sep 30 14:41:31 compute-0 ovn_controller[154021]: 2025-09-30T14:41:31Z|00037|binding|INFO|Removing iface tap282b94c3-10 ovn-installed in OVS
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.430 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:6d:09 10.100.0.9'], port_security=['fa:16:3e:d6:6d:09 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4a2e4963-f354-48e2-af39-ce9e01d9eda1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ef079-a88d-4ba4-9e93-1ee01981c523', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1baa670-82bf-43ae-8178-ebda74520dfe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f089b95-aa99-452c-956e-b34986024bcf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=282b94c3-1056-44d8-9ca4-959a3718bd94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.432 163966 INFO neutron.agent.ovn.metadata.agent [-] Port 282b94c3-1056-44d8-9ca4-959a3718bd94 in datapath ac4ef079-a88d-4ba4-9e93-1ee01981c523 unbound from our chassis
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.435 163966 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac4ef079-a88d-4ba4-9e93-1ee01981c523, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.438 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[d71f4e08-4329-4fd8-93ac-b98f28f2596e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.439 163966 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523 namespace which is not needed anymore
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:31 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Sep 30 14:41:31 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 17.102s CPU time.
Sep 30 14:41:31 compute-0 systemd-machined[215710]: Machine qemu-1-instance-00000001 terminated.
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.578 2 INFO nova.virt.libvirt.driver [-] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Instance destroyed successfully.
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.579 2 DEBUG nova.objects.instance [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'resources' on Instance uuid 4a2e4963-f354-48e2-af39-ce9e01d9eda1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.592 2 DEBUG nova.virt.libvirt.vif [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:40:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1457461652',display_name='tempest-TestNetworkBasicOps-server-1457461652',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1457461652',id=1,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbHcRx+ioPEkeKlOP9E9zJz227uPsnvyJ1Yk+aBS4J9PIyvkuS/b/ZsYDRdrf5CTtnk9Ao6kff0l7PrelfecOiO5NvxZp3J3t640l4shG20oMhTFwH9twPyhww6w5ovpg==',key_name='tempest-TestNetworkBasicOps-693846457',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:40:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-dn7nfker',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:40:20Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=4a2e4963-f354-48e2-af39-ce9e01d9eda1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.593 2 DEBUG nova.network.os_vif_util [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.594 2 DEBUG nova.network.os_vif_util [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:6d:09,bridge_name='br-int',has_traffic_filtering=True,id=282b94c3-1056-44d8-9ca4-959a3718bd94,network=Network(ac4ef079-a88d-4ba4-9e93-1ee01981c523),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282b94c3-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.594 2 DEBUG os_vif [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:6d:09,bridge_name='br-int',has_traffic_filtering=True,id=282b94c3-1056-44d8-9ca4-959a3718bd94,network=Network(ac4ef079-a88d-4ba4-9e93-1ee01981c523),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282b94c3-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap282b94c3-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.601 2 INFO os_vif [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:6d:09,bridge_name='br-int',has_traffic_filtering=True,id=282b94c3-1056-44d8-9ca4-959a3718bd94,network=Network(ac4ef079-a88d-4ba4-9e93-1ee01981c523),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282b94c3-10')
Sep 30 14:41:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:41:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:31.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:41:31 compute-0 neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523[269203]: [NOTICE]   (269241) : haproxy version is 2.8.14-c23fe91
Sep 30 14:41:31 compute-0 neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523[269203]: [NOTICE]   (269241) : path to executable is /usr/sbin/haproxy
Sep 30 14:41:31 compute-0 neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523[269203]: [WARNING]  (269241) : Exiting Master process...
Sep 30 14:41:31 compute-0 neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523[269203]: [ALERT]    (269241) : Current worker (269250) exited with code 143 (Terminated)
Sep 30 14:41:31 compute-0 neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523[269203]: [WARNING]  (269241) : All workers exited. Exiting... (0)
Sep 30 14:41:31 compute-0 systemd[1]: libpod-37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00.scope: Deactivated successfully.
Sep 30 14:41:31 compute-0 podman[271291]: 2025-09-30 14:41:31.62283744 +0000 UTC m=+0.071210145 container died 37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 14:41:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d295d6724a7d1d932ac986ff400d430b397abce6fcfecd85371456fa34249192-merged.mount: Deactivated successfully.
Sep 30 14:41:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00-userdata-shm.mount: Deactivated successfully.
Sep 30 14:41:31 compute-0 podman[271291]: 2025-09-30 14:41:31.662559402 +0000 UTC m=+0.110932107 container cleanup 37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.schema-version=1.0)
Sep 30 14:41:31 compute-0 systemd[1]: libpod-conmon-37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00.scope: Deactivated successfully.
Sep 30 14:41:31 compute-0 podman[271346]: 2025-09-30 14:41:31.728155415 +0000 UTC m=+0.038408617 container remove 37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.734 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[a77ff7d4-58a0-47e4-b39c-a656f3f7dc50]: (4, ('Tue Sep 30 02:41:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523 (37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00)\n37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00\nTue Sep 30 02:41:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523 (37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00)\n37bbd899ab6ecf560a8403e29b337972a606807081818ac16fc6df648d9e3d00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.737 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[4a300f02-c5eb-4d8e-98ee-acf8009b0676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.739 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4ef079-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:41:31 compute-0 kernel: tapac4ef079-a0: left promiscuous mode
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.746 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2b837e-8be2-40e1-a81a-0b844fcba208]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:41:31 compute-0 nova_compute[261524]: 2025-09-30 14:41:31.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.783 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f116ef-89b9-40a9-a54d-50652665a596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.785 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe726e4-89e2-4e92-9ff8-222b545aadc5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.801 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[74903621-2252-4791-8476-c2d15d6e876c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658442, 'reachable_time': 20234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271361, 'error': None, 'target': 'ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:41:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dac4ef079\x2da88d\x2d4ba4\x2d9e93\x2d1ee01981c523.mount: Deactivated successfully.
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.825 164124 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ac4ef079-a88d-4ba4-9e93-1ee01981c523 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Sep 30 14:41:31 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:31.828 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[56c71351-0054-4b4c-a7e2-ec65106db021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.050 2 INFO nova.virt.libvirt.driver [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Deleting instance files /var/lib/nova/instances/4a2e4963-f354-48e2-af39-ce9e01d9eda1_del
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.051 2 INFO nova.virt.libvirt.driver [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Deletion of /var/lib/nova/instances/4a2e4963-f354-48e2-af39-ce9e01d9eda1_del complete
Sep 30 14:41:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.091 2 DEBUG nova.compute.manager [req-ba0a88ac-50ab-4d57-a375-54021e30197f req-ba32b885-2017-480d-9e46-da936d91a7e0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received event network-vif-unplugged-282b94c3-1056-44d8-9ca4-959a3718bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.091 2 DEBUG oslo_concurrency.lockutils [req-ba0a88ac-50ab-4d57-a375-54021e30197f req-ba32b885-2017-480d-9e46-da936d91a7e0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.092 2 DEBUG oslo_concurrency.lockutils [req-ba0a88ac-50ab-4d57-a375-54021e30197f req-ba32b885-2017-480d-9e46-da936d91a7e0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.092 2 DEBUG oslo_concurrency.lockutils [req-ba0a88ac-50ab-4d57-a375-54021e30197f req-ba32b885-2017-480d-9e46-da936d91a7e0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.092 2 DEBUG nova.compute.manager [req-ba0a88ac-50ab-4d57-a375-54021e30197f req-ba32b885-2017-480d-9e46-da936d91a7e0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] No waiting events found dispatching network-vif-unplugged-282b94c3-1056-44d8-9ca4-959a3718bd94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.092 2 DEBUG nova.compute.manager [req-ba0a88ac-50ab-4d57-a375-54021e30197f req-ba32b885-2017-480d-9e46-da936d91a7e0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received event network-vif-unplugged-282b94c3-1056-44d8-9ca4-959a3718bd94 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.130 2 DEBUG nova.virt.libvirt.host [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.131 2 INFO nova.virt.libvirt.host [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] UEFI support detected
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.134 2 INFO nova.compute.manager [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Took 0.80 seconds to destroy the instance on the hypervisor.
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.135 2 DEBUG oslo.service.loopingcall [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.136 2 DEBUG nova.compute.manager [-] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.136 2 DEBUG nova.network.neutron [-] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Sep 30 14:41:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v781: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 120 KiB/s wr, 47 op/s
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.899 2 DEBUG nova.network.neutron [-] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.920 2 INFO nova.compute.manager [-] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Took 0.78 seconds to deallocate network for instance.
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.986 2 DEBUG oslo_concurrency.lockutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.987 2 DEBUG oslo_concurrency.lockutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.989 2 DEBUG nova.network.neutron [req-2f41b436-ecd2-40c4-807f-094d4de24c6f req-08cd720b-4bc8-4138-af36-2d23a32b6b2a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Updated VIF entry in instance network info cache for port 282b94c3-1056-44d8-9ca4-959a3718bd94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:41:32 compute-0 nova_compute[261524]: 2025-09-30 14:41:32.990 2 DEBUG nova.network.neutron [req-2f41b436-ecd2-40c4-807f-094d4de24c6f req-08cd720b-4bc8-4138-af36-2d23a32b6b2a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Updating instance_info_cache with network_info: [{"id": "282b94c3-1056-44d8-9ca4-959a3718bd94", "address": "fa:16:3e:d6:6d:09", "network": {"id": "ac4ef079-a88d-4ba4-9e93-1ee01981c523", "bridge": "br-int", "label": "tempest-network-smoke--1497657702", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282b94c3-10", "ovs_interfaceid": "282b94c3-1056-44d8-9ca4-959a3718bd94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:41:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:32.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:33 compute-0 nova_compute[261524]: 2025-09-30 14:41:33.014 2 DEBUG oslo_concurrency.lockutils [req-2f41b436-ecd2-40c4-807f-094d4de24c6f req-08cd720b-4bc8-4138-af36-2d23a32b6b2a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-4a2e4963-f354-48e2-af39-ce9e01d9eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:41:33 compute-0 nova_compute[261524]: 2025-09-30 14:41:33.066 2 DEBUG oslo_concurrency.processutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:41:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:41:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031313937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:33 compute-0 nova_compute[261524]: 2025-09-30 14:41:33.535 2 DEBUG oslo_concurrency.processutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:41:33 compute-0 nova_compute[261524]: 2025-09-30 14:41:33.541 2 DEBUG nova.compute.provider_tree [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:41:33 compute-0 nova_compute[261524]: 2025-09-30 14:41:33.561 2 DEBUG nova.scheduler.client.report [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:41:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:41:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:33.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:41:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:33.630Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:33 compute-0 ceph-mon[74194]: pgmap v781: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 120 KiB/s wr, 47 op/s
Sep 30 14:41:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3031313937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:33 compute-0 nova_compute[261524]: 2025-09-30 14:41:33.715 2 DEBUG oslo_concurrency.lockutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v782: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 29 op/s
Sep 30 14:41:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:34] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Sep 30 14:41:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:34] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Sep 30 14:41:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000054s ======
Sep 30 14:41:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:34.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Sep 30 14:41:35 compute-0 nova_compute[261524]: 2025-09-30 14:41:35.234 2 INFO nova.scheduler.client.report [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Deleted allocations for instance 4a2e4963-f354-48e2-af39-ce9e01d9eda1
Sep 30 14:41:35 compute-0 nova_compute[261524]: 2025-09-30 14:41:35.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:35.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:35 compute-0 ceph-mon[74194]: pgmap v782: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 29 op/s
Sep 30 14:41:35 compute-0 nova_compute[261524]: 2025-09-30 14:41:35.876 2 DEBUG oslo_concurrency.lockutils [None req-614ae054-3ca3-4eec-b9b7-0f5bfc1c7aa3 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.009 2 DEBUG nova.compute.manager [req-7774968b-8a68-4d77-b3bc-575d22ba814f req-6c65ce44-97c9-4207-84d0-5295b0a0650e e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received event network-vif-plugged-282b94c3-1056-44d8-9ca4-959a3718bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.010 2 DEBUG oslo_concurrency.lockutils [req-7774968b-8a68-4d77-b3bc-575d22ba814f req-6c65ce44-97c9-4207-84d0-5295b0a0650e e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.011 2 DEBUG oslo_concurrency.lockutils [req-7774968b-8a68-4d77-b3bc-575d22ba814f req-6c65ce44-97c9-4207-84d0-5295b0a0650e e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.012 2 DEBUG oslo_concurrency.lockutils [req-7774968b-8a68-4d77-b3bc-575d22ba814f req-6c65ce44-97c9-4207-84d0-5295b0a0650e e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "4a2e4963-f354-48e2-af39-ce9e01d9eda1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.012 2 DEBUG nova.compute.manager [req-7774968b-8a68-4d77-b3bc-575d22ba814f req-6c65ce44-97c9-4207-84d0-5295b0a0650e e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] No waiting events found dispatching network-vif-plugged-282b94c3-1056-44d8-9ca4-959a3718bd94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.013 2 WARNING nova.compute.manager [req-7774968b-8a68-4d77-b3bc-575d22ba814f req-6c65ce44-97c9-4207-84d0-5295b0a0650e e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received unexpected event network-vif-plugged-282b94c3-1056-44d8-9ca4-959a3718bd94 for instance with vm_state deleted and task_state None.
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.013 2 DEBUG nova.compute.manager [req-7774968b-8a68-4d77-b3bc-575d22ba814f req-6c65ce44-97c9-4207-84d0-5295b0a0650e e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Received event network-vif-deleted-282b94c3-1056-44d8-9ca4-959a3718bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.014 2 INFO nova.compute.manager [req-7774968b-8a68-4d77-b3bc-575d22ba814f req-6c65ce44-97c9-4207-84d0-5295b0a0650e e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Neutron deleted interface 282b94c3-1056-44d8-9ca4-959a3718bd94; detaching it from the instance and deleting it from the info cache
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.014 2 DEBUG nova.network.neutron [req-7774968b-8a68-4d77-b3bc-575d22ba814f req-6c65ce44-97c9-4207-84d0-5295b0a0650e e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.017 2 DEBUG nova.compute.manager [req-7774968b-8a68-4d77-b3bc-575d22ba814f req-6c65ce44-97c9-4207-84d0-5295b0a0650e e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Detach interface failed, port_id=282b94c3-1056-44d8-9ca4-959a3718bd94, reason: Instance 4a2e4963-f354-48e2-af39-ce9e01d9eda1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Sep 30 14:41:36 compute-0 nova_compute[261524]: 2025-09-30 14:41:36.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v783: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Sep 30 14:41:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:36.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:37.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:37.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:37 compute-0 ceph-mon[74194]: pgmap v783: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.756318) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243297756358, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1272, "num_deletes": 251, "total_data_size": 2332666, "memory_usage": 2365344, "flush_reason": "Manual Compaction"}
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243297770045, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 2262713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23672, "largest_seqno": 24943, "table_properties": {"data_size": 2256652, "index_size": 3324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13520, "raw_average_key_size": 20, "raw_value_size": 2244193, "raw_average_value_size": 3384, "num_data_blocks": 145, "num_entries": 663, "num_filter_entries": 663, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759243193, "oldest_key_time": 1759243193, "file_creation_time": 1759243297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 13800 microseconds, and 6373 cpu microseconds.
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.770109) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 2262713 bytes OK
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.770202) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.773440) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.773513) EVENT_LOG_v1 {"time_micros": 1759243297773498, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.773548) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2326991, prev total WAL file size 2326991, number of live WAL files 2.
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.774917) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(2209KB)], [53(12MB)]
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243297774967, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 15272623, "oldest_snapshot_seqno": -1}
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5462 keys, 12995094 bytes, temperature: kUnknown
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243297855390, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12995094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12958656, "index_size": 21672, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 140036, "raw_average_key_size": 25, "raw_value_size": 12859766, "raw_average_value_size": 2354, "num_data_blocks": 880, "num_entries": 5462, "num_filter_entries": 5462, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759243297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.855763) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12995094 bytes
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.857101) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.6 rd, 161.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 12.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(12.5) write-amplify(5.7) OK, records in: 5986, records dropped: 524 output_compression: NoCompression
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.857121) EVENT_LOG_v1 {"time_micros": 1759243297857112, "job": 28, "event": "compaction_finished", "compaction_time_micros": 80534, "compaction_time_cpu_micros": 25890, "output_level": 6, "num_output_files": 1, "total_output_size": 12995094, "num_input_records": 5986, "num_output_records": 5462, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243297857775, "job": 28, "event": "table_file_deletion", "file_number": 55}
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243297860636, "job": 28, "event": "table_file_deletion", "file_number": 53}
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.774804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.860758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.860766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.860767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.860769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:41:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:41:37.860771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:41:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:38.257 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:38.258 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:38.258 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v784: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.2 KiB/s wr, 31 op/s
Sep 30 14:41:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:38.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:39.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:39 compute-0 ceph-mon[74194]: pgmap v784: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.2 KiB/s wr, 31 op/s
Sep 30 14:41:40 compute-0 nova_compute[261524]: 2025-09-30 14:41:40.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v785: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.2 KiB/s wr, 31 op/s
Sep 30 14:41:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:41.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:41 compute-0 nova_compute[261524]: 2025-09-30 14:41:41.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:41.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:41 compute-0 ceph-mon[74194]: pgmap v785: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.2 KiB/s wr, 31 op/s
Sep 30 14:41:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v786: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.2 KiB/s wr, 32 op/s
Sep 30 14:41:42 compute-0 nova_compute[261524]: 2025-09-30 14:41:42.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:42 compute-0 nova_compute[261524]: 2025-09-30 14:41:42.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:43.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:43.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:43.631Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:43 compute-0 ceph-mon[74194]: pgmap v786: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.2 KiB/s wr, 32 op/s
Sep 30 14:41:43 compute-0 sudo[271400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:41:43 compute-0 sudo[271400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:41:43 compute-0 sudo[271400]: pam_unix(sudo:session): session closed for user root
Sep 30 14:41:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v787: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:41:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:41:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:41:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:44] "GET /metrics HTTP/1.1" 200 48520 "" "Prometheus/2.51.0"
Sep 30 14:41:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:44] "GET /metrics HTTP/1.1" 200 48520 "" "Prometheus/2.51.0"
Sep 30 14:41:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:41:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:45.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:45 compute-0 nova_compute[261524]: 2025-09-30 14:41:45.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:45.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:45 compute-0 ceph-mon[74194]: pgmap v787: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:41:46 compute-0 nova_compute[261524]: 2025-09-30 14:41:46.576 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759243291.5754204, 4a2e4963-f354-48e2-af39-ce9e01d9eda1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:41:46 compute-0 nova_compute[261524]: 2025-09-30 14:41:46.577 2 INFO nova.compute.manager [-] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] VM Stopped (Lifecycle Event)
Sep 30 14:41:46 compute-0 nova_compute[261524]: 2025-09-30 14:41:46.601 2 DEBUG nova.compute.manager [None req-06f1c2a6-b45b-4da2-bda9-a506ae395595 - - - - - -] [instance: 4a2e4963-f354-48e2-af39-ce9e01d9eda1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:41:46 compute-0 nova_compute[261524]: 2025-09-30 14:41:46.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v788: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:41:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:47.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:47 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:47.215 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:41:47 compute-0 nova_compute[261524]: 2025-09-30 14:41:47.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:47 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:47.216 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:41:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:47.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:47 compute-0 ceph-mon[74194]: pgmap v788: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:41:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v789: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:41:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:49.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:49 compute-0 ceph-mon[74194]: pgmap v789: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:41:50 compute-0 nova_compute[261524]: 2025-09-30 14:41:50.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v790: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:41:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:51.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:51 compute-0 nova_compute[261524]: 2025-09-30 14:41:51.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:51.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:51 compute-0 ceph-mon[74194]: pgmap v790: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:41:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v791: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:41:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:53.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:53.633Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:41:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:53.633Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:41:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:53.634Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:53.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:53 compute-0 ceph-mon[74194]: pgmap v791: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:41:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:54 compute-0 podman[271438]: 2025-09-30 14:41:54.123260299 +0000 UTC m=+0.047516813 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Sep 30 14:41:54 compute-0 podman[271435]: 2025-09-30 14:41:54.127042697 +0000 UTC m=+0.057657776 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Sep 30 14:41:54 compute-0 podman[271437]: 2025-09-30 14:41:54.159493288 +0000 UTC m=+0.074434681 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Sep 30 14:41:54 compute-0 podman[271436]: 2025-09-30 14:41:54.176410557 +0000 UTC m=+0.095739234 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Sep 30 14:41:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v792: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:41:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:54] "GET /metrics HTTP/1.1" 200 48520 "" "Prometheus/2.51.0"
Sep 30 14:41:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:41:54] "GET /metrics HTTP/1.1" 200 48520 "" "Prometheus/2.51.0"
Sep 30 14:41:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:55.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:55 compute-0 nova_compute[261524]: 2025-09-30 14:41:55.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:55.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:55 compute-0 ceph-mon[74194]: pgmap v792: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:41:56 compute-0 nova_compute[261524]: 2025-09-30 14:41:56.301 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:56 compute-0 nova_compute[261524]: 2025-09-30 14:41:56.301 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:56 compute-0 nova_compute[261524]: 2025-09-30 14:41:56.324 2 DEBUG nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Sep 30 14:41:56 compute-0 nova_compute[261524]: 2025-09-30 14:41:56.404 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:56 compute-0 nova_compute[261524]: 2025-09-30 14:41:56.404 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:56 compute-0 nova_compute[261524]: 2025-09-30 14:41:56.412 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Sep 30 14:41:56 compute-0 nova_compute[261524]: 2025-09-30 14:41:56.413 2 INFO nova.compute.claims [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Claim successful on node compute-0.ctlplane.example.com
Sep 30 14:41:56 compute-0 nova_compute[261524]: 2025-09-30 14:41:56.512 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:41:56 compute-0 nova_compute[261524]: 2025-09-30 14:41:56.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:41:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v793: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:41:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:41:56 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260674491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.012 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.020 2 DEBUG nova.compute.provider_tree [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:41:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.040 2 DEBUG nova.scheduler.client.report [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
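The inventory reported to Placement in the line above implies the schedulable capacity below under the standard formula capacity = (total - reserved) * allocation_ratio. A quick check of that arithmetic using the exact values from the log:

# Effective capacity implied by the inventory reported at 14:41:57.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity}")
# MEMORY_MB: 7167.0, VCPU: 32.0, DISK_GB: 52.2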
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.070 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.072 2 DEBUG nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Sep 30 14:41:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:41:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:57.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.125 2 DEBUG nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Sep 30 14:41:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:57.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.126 2 DEBUG nova.network.neutron [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Sep 30 14:41:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:41:57.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.149 2 INFO nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.173 2 DEBUG nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Sep 30 14:41:57 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:41:57.220 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.269 2 DEBUG nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.270 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.270 2 INFO nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Creating image(s)
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.304 2 DEBUG nova.storage.rbd_utils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c7b89511-067a-4ecf-9b88-41170118da87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.347 2 DEBUG nova.storage.rbd_utils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c7b89511-067a-4ecf-9b88-41170118da87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.386 2 DEBUG nova.storage.rbd_utils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c7b89511-067a-4ecf-9b88-41170118da87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.390 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.470 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.471 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.472 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.473 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.505 2 DEBUG nova.storage.rbd_utils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c7b89511-067a-4ecf-9b88-41170118da87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.508 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 c7b89511-067a-4ecf-9b88-41170118da87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:41:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:57.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:57 compute-0 nova_compute[261524]: 2025-09-30 14:41:57.995 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 c7b89511-067a-4ecf-9b88-41170118da87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:41:58 compute-0 ceph-mon[74194]: pgmap v793: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:41:58 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1260674491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:41:58 compute-0 nova_compute[261524]: 2025-09-30 14:41:58.050 2 DEBUG nova.policy [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59c80c4f189d4667aec64b43afc69ed2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Sep 30 14:41:58 compute-0 nova_compute[261524]: 2025-09-30 14:41:58.105 2 DEBUG nova.storage.rbd_utils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] resizing rbd image c7b89511-067a-4ecf-9b88-41170118da87_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
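The RBD flow above (repeated existence checks, an `rbd import` of the cached base image into the vms pool, then a resize to 1073741824 bytes) can be verified out of band with the python-rados/python-rbd bindings. A read-only sketch, reusing the client id, conf file, pool and image name from the log lines:

import rados
import rbd

# Open the cluster with the same client and conf file nova uses in the log,
# then report the size of the freshly imported instance disk.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        name = "c7b89511-067a-4ecf-9b88-41170118da87_disk"
        with rbd.Image(ioctx, name, read_only=True) as image:
            print(name, "size:", image.size(), "bytes")  # expect 1073741824 after the resize
    finally:
        ioctx.close()
finally:
    cluster.shutdown()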
Sep 30 14:41:58 compute-0 nova_compute[261524]: 2025-09-30 14:41:58.237 2 DEBUG nova.objects.instance [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'migration_context' on Instance uuid c7b89511-067a-4ecf-9b88-41170118da87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:41:58 compute-0 nova_compute[261524]: 2025-09-30 14:41:58.251 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Sep 30 14:41:58 compute-0 nova_compute[261524]: 2025-09-30 14:41:58.252 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Ensure instance console log exists: /var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Sep 30 14:41:58 compute-0 nova_compute[261524]: 2025-09-30 14:41:58.252 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:41:58 compute-0 nova_compute[261524]: 2025-09-30 14:41:58.252 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:41:58 compute-0 nova_compute[261524]: 2025-09-30 14:41:58.252 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:41:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v794: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:41:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:41:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:41:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:41:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:41:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:41:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:41:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:41:59.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:41:59
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['.nfs', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', 'volumes', 'default.rgw.meta', 'backups', 'vms']
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:41:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:41:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:41:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:41:59.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:41:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:41:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:41:59 compute-0 nova_compute[261524]: 2025-09-30 14:41:59.713 2 DEBUG nova.network.neutron [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Successfully created port: fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:41:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
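The pg_autoscaler targets in the block above are consistent with target = capacity_ratio * bias * 300, where 300 would correspond to 3 OSDs at the default mon_target_pg_per_osd of 100; that multiplier is an inference from the numbers, not something stated in the log (the result is then quantized to a power of two, which is the "quantized to" value shown). A cross-check against three of the logged pools:

# Cross-check of the pg_autoscaler arithmetic above.
# TARGET_PGS = 300 is an assumption (3 OSDs * default mon_target_pg_per_osd of 100).
samples = {
    ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
    "images":             (0.000665858301588852,  1.0, 0.19975749047665559),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
}

TARGET_PGS = 300

for pool, (ratio, bias, logged_target) in samples.items():
    computed = ratio * bias * TARGET_PGS
    print(f"{pool}: computed={computed:.12g} logged={logged_target:.12g}")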
Sep 30 14:42:00 compute-0 ceph-mon[74194]: pgmap v794: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:42:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:42:00 compute-0 nova_compute[261524]: 2025-09-30 14:42:00.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v795: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:42:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:42:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:42:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:42:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:42:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:42:01 compute-0 nova_compute[261524]: 2025-09-30 14:42:01.011 2 DEBUG nova.network.neutron [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Successfully updated port: fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Sep 30 14:42:01 compute-0 nova_compute[261524]: 2025-09-30 14:42:01.027 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:42:01 compute-0 nova_compute[261524]: 2025-09-30 14:42:01.028 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquired lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:42:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:01.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:01 compute-0 nova_compute[261524]: 2025-09-30 14:42:01.028 2 DEBUG nova.network.neutron [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Sep 30 14:42:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:42:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:42:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:42:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:42:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:42:01 compute-0 nova_compute[261524]: 2025-09-30 14:42:01.109 2 DEBUG nova.compute.manager [req-cde08b26-ad09-48bb-8ee2-8222becd1bda req-e4dec396-9eef-47a9-89bb-961407bf4488 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-changed-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:01 compute-0 nova_compute[261524]: 2025-09-30 14:42:01.110 2 DEBUG nova.compute.manager [req-cde08b26-ad09-48bb-8ee2-8222becd1bda req-e4dec396-9eef-47a9-89bb-961407bf4488 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Refreshing instance network info cache due to event network-changed-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:42:01 compute-0 nova_compute[261524]: 2025-09-30 14:42:01.110 2 DEBUG oslo_concurrency.lockutils [req-cde08b26-ad09-48bb-8ee2-8222becd1bda req-e4dec396-9eef-47a9-89bb-961407bf4488 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:42:01 compute-0 nova_compute[261524]: 2025-09-30 14:42:01.197 2 DEBUG nova.network.neutron [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Sep 30 14:42:01 compute-0 nova_compute[261524]: 2025-09-30 14:42:01.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:42:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:01.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:42:02 compute-0 ceph-mon[74194]: pgmap v795: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:42:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.559 2 DEBUG nova.network.neutron [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updating instance_info_cache with network_info: [{"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.580 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Releasing lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.580 2 DEBUG nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Instance network_info: |[{"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.580 2 DEBUG oslo_concurrency.lockutils [req-cde08b26-ad09-48bb-8ee2-8222becd1bda req-e4dec396-9eef-47a9-89bb-961407bf4488 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.581 2 DEBUG nova.network.neutron [req-cde08b26-ad09-48bb-8ee2-8222becd1bda req-e4dec396-9eef-47a9-89bb-961407bf4488 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Refreshing network info cache for port fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.584 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Start _get_guest_xml network_info=[{"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T14:39:17Z,direct_url=<?>,disk_format='qcow2',id=7c70cf84-edc3-42b2-a094-ae3c1dbaffe4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5beed35d375f4bd185a6774dc475e0b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T14:39:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'disk_bus': 'virtio', 'image_id': '7c70cf84-edc3-42b2-a094-ae3c1dbaffe4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.591 2 WARNING nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.600 2 DEBUG nova.virt.libvirt.host [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.600 2 DEBUG nova.virt.libvirt.host [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.604 2 DEBUG nova.virt.libvirt.host [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.605 2 DEBUG nova.virt.libvirt.host [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.605 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.605 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T14:39:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='64f3d3b9-41b6-4b89-8bbd-f654faf17546',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T14:39:17Z,direct_url=<?>,disk_format='qcow2',id=7c70cf84-edc3-42b2-a094-ae3c1dbaffe4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5beed35d375f4bd185a6774dc475e0b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T14:39:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.606 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.606 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.606 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.606 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.606 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.607 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.607 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.607 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.607 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.607 2 DEBUG nova.virt.hardware [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Sep 30 14:42:02 compute-0 nova_compute[261524]: 2025-09-30 14:42:02.610 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:42:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v796: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:42:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:03.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 14:42:03 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/793151755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.147 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.179 2 DEBUG nova.storage.rbd_utils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c7b89511-067a-4ecf-9b88-41170118da87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.184 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:42:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 14:42:03 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/866187569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.616 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.618 2 DEBUG nova.virt.libvirt.vif [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-09-30T14:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2031188285',display_name='tempest-TestNetworkBasicOps-server-2031188285',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2031188285',id=3,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAImrCwKSNyEdm98pPfZ4sjS6exrK+14H3hUnFOdL8Y5dY0kzO28iP+MIhWAQTc22os7ImKeOILYxLVSkpa7J7So6O1Rtmi7C5fPdNcVDkCJS373V5RS7Al59MW7kPAog==',key_name='tempest-TestNetworkBasicOps-1988791059',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-nwtvr0g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T14:41:57Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c7b89511-067a-4ecf-9b88-41170118da87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": 
{}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.619 2 DEBUG nova.network.os_vif_util [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.620 2 DEBUG nova.network.os_vif_util [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c5:53,bridge_name='br-int',has_traffic_filtering=True,id=fdd76f4a-6a11-467c-8f19-0b00baa4dbd1,network=Network(31e82792-2132-423c-8fb3-0fd2453172b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd76f4a-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.621 2 DEBUG nova.objects.instance [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'pci_devices' on Instance uuid c7b89511-067a-4ecf-9b88-41170118da87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:42:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:03.634Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.634 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] End _get_guest_xml xml=<domain type="kvm">
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <uuid>c7b89511-067a-4ecf-9b88-41170118da87</uuid>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <name>instance-00000003</name>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <memory>131072</memory>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <vcpu>1</vcpu>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <nova:name>tempest-TestNetworkBasicOps-server-2031188285</nova:name>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <nova:creationTime>2025-09-30 14:42:02</nova:creationTime>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <nova:flavor name="m1.nano">
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <nova:memory>128</nova:memory>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <nova:disk>1</nova:disk>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <nova:swap>0</nova:swap>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <nova:vcpus>1</nova:vcpus>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       </nova:flavor>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <nova:owner>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       </nova:owner>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <nova:ports>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <nova:port uuid="fdd76f4a-6a11-467c-8f19-0b00baa4dbd1">
Sep 30 14:42:03 compute-0 nova_compute[261524]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         </nova:port>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       </nova:ports>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     </nova:instance>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <sysinfo type="smbios">
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <system>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <entry name="manufacturer">RDO</entry>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <entry name="product">OpenStack Compute</entry>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <entry name="serial">c7b89511-067a-4ecf-9b88-41170118da87</entry>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <entry name="uuid">c7b89511-067a-4ecf-9b88-41170118da87</entry>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <entry name="family">Virtual Machine</entry>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     </system>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <os>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <boot dev="hd"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <smbios mode="sysinfo"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   </os>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <features>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <vmcoreinfo/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   </features>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <clock offset="utc">
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <timer name="hpet" present="no"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <cpu mode="host-model" match="exact">
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <disk type="network" device="disk">
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <driver type="raw" cache="none"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <source protocol="rbd" name="vms/c7b89511-067a-4ecf-9b88-41170118da87_disk">
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <host name="192.168.122.100" port="6789"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <host name="192.168.122.102" port="6789"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <host name="192.168.122.101" port="6789"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       </source>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <auth username="openstack">
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <secret type="ceph" uuid="5e3c7776-ac03-5698-b79f-a6dc2d80cae6"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <target dev="vda" bus="virtio"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <disk type="network" device="cdrom">
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <driver type="raw" cache="none"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <source protocol="rbd" name="vms/c7b89511-067a-4ecf-9b88-41170118da87_disk.config">
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <host name="192.168.122.100" port="6789"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <host name="192.168.122.102" port="6789"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <host name="192.168.122.101" port="6789"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       </source>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <auth username="openstack">
Sep 30 14:42:03 compute-0 nova_compute[261524]:         <secret type="ceph" uuid="5e3c7776-ac03-5698-b79f-a6dc2d80cae6"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <target dev="sda" bus="sata"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <interface type="ethernet">
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <mac address="fa:16:3e:0e:c5:53"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <model type="virtio"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <mtu size="1442"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <target dev="tapfdd76f4a-6a"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <serial type="pty">
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <log file="/var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/console.log" append="off"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <video>
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <model type="virtio"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     </video>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <input type="tablet" bus="usb"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <rng model="virtio">
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <backend model="random">/dev/urandom</backend>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <controller type="usb" index="0"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     <memballoon model="virtio">
Sep 30 14:42:03 compute-0 nova_compute[261524]:       <stats period="10"/>
Sep 30 14:42:03 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:42:03 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:42:03 compute-0 nova_compute[261524]: </domain>
Sep 30 14:42:03 compute-0 nova_compute[261524]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.636 2 DEBUG nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Preparing to wait for external event network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.636 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.637 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.637 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.637 2 DEBUG nova.virt.libvirt.vif [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-09-30T14:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2031188285',display_name='tempest-TestNetworkBasicOps-server-2031188285',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2031188285',id=3,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAImrCwKSNyEdm98pPfZ4sjS6exrK+14H3hUnFOdL8Y5dY0kzO28iP+MIhWAQTc22os7ImKeOILYxLVSkpa7J7So6O1Rtmi7C5fPdNcVDkCJS373V5RS7Al59MW7kPAog==',key_name='tempest-TestNetworkBasicOps-1988791059',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-nwtvr0g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T14:41:57Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c7b89511-067a-4ecf-9b88-41170118da87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.638 2 DEBUG nova.network.os_vif_util [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.638 2 DEBUG nova.network.os_vif_util [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c5:53,bridge_name='br-int',has_traffic_filtering=True,id=fdd76f4a-6a11-467c-8f19-0b00baa4dbd1,network=Network(31e82792-2132-423c-8fb3-0fd2453172b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd76f4a-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.639 2 DEBUG os_vif [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c5:53,bridge_name='br-int',has_traffic_filtering=True,id=fdd76f4a-6a11-467c-8f19-0b00baa4dbd1,network=Network(31e82792-2132-423c-8fb3-0fd2453172b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd76f4a-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.640 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.640 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdd76f4a-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdd76f4a-6a, col_values=(('external_ids', {'iface-id': 'fdd76f4a-6a11-467c-8f19-0b00baa4dbd1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:c5:53', 'vm-uuid': 'c7b89511-067a-4ecf-9b88-41170118da87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:03 compute-0 NetworkManager[45472]: <info>  [1759243323.6487] manager: (tapfdd76f4a-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Sep 30 14:42:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:03.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.658 2 INFO os_vif [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c5:53,bridge_name='br-int',has_traffic_filtering=True,id=fdd76f4a-6a11-467c-8f19-0b00baa4dbd1,network=Network(31e82792-2132-423c-8fb3-0fd2453172b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd76f4a-6a')
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.715 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.716 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.716 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No VIF found with MAC fa:16:3e:0e:c5:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.716 2 INFO nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Using config drive
Sep 30 14:42:03 compute-0 nova_compute[261524]: 2025-09-30 14:42:03.743 2 DEBUG nova.storage.rbd_utils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c7b89511-067a-4ecf-9b88-41170118da87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:42:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:04 compute-0 sudo[271797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:42:04 compute-0 sudo[271797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:04 compute-0 sudo[271797]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:04 compute-0 ceph-mon[74194]: pgmap v796: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:42:04 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/793151755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:42:04 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/866187569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.107 2 INFO nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Creating config drive at /var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/disk.config
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.113 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk4b65jg6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.171 2 DEBUG nova.network.neutron [req-cde08b26-ad09-48bb-8ee2-8222becd1bda req-e4dec396-9eef-47a9-89bb-961407bf4488 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updated VIF entry in instance network info cache for port fdd76f4a-6a11-467c-8f19-0b00baa4dbd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.172 2 DEBUG nova.network.neutron [req-cde08b26-ad09-48bb-8ee2-8222becd1bda req-e4dec396-9eef-47a9-89bb-961407bf4488 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updating instance_info_cache with network_info: [{"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.188 2 DEBUG oslo_concurrency.lockutils [req-cde08b26-ad09-48bb-8ee2-8222becd1bda req-e4dec396-9eef-47a9-89bb-961407bf4488 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.244 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk4b65jg6" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.280 2 DEBUG nova.storage.rbd_utils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c7b89511-067a-4ecf-9b88-41170118da87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.285 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/disk.config c7b89511-067a-4ecf-9b88-41170118da87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.479 2 DEBUG oslo_concurrency.processutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/disk.config c7b89511-067a-4ecf-9b88-41170118da87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.480 2 INFO nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Deleting local config drive /var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/disk.config because it was imported into RBD.
Sep 30 14:42:04 compute-0 kernel: tapfdd76f4a-6a: entered promiscuous mode
Sep 30 14:42:04 compute-0 NetworkManager[45472]: <info>  [1759243324.5459] manager: (tapfdd76f4a-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Sep 30 14:42:04 compute-0 ovn_controller[154021]: 2025-09-30T14:42:04Z|00038|binding|INFO|Claiming lport fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 for this chassis.
Sep 30 14:42:04 compute-0 ovn_controller[154021]: 2025-09-30T14:42:04Z|00039|binding|INFO|fdd76f4a-6a11-467c-8f19-0b00baa4dbd1: Claiming fa:16:3e:0e:c5:53 10.100.0.11
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.566 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:c5:53 10.100.0.11'], port_security=['fa:16:3e:0e:c5:53 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c7b89511-067a-4ecf-9b88-41170118da87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e82792-2132-423c-8fb3-0fd2453172b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8607df68-326f-4ba4-bbe1-2a261640b927', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb4fab7e-c674-4340-863c-8e9b5fa39d81, chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=fdd76f4a-6a11-467c-8f19-0b00baa4dbd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.568 163966 INFO neutron.agent.ovn.metadata.agent [-] Port fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 in datapath 31e82792-2132-423c-8fb3-0fd2453172b3 bound to our chassis
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.570 163966 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31e82792-2132-423c-8fb3-0fd2453172b3
Sep 30 14:42:04 compute-0 systemd-machined[215710]: New machine qemu-2-instance-00000003.
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.585 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[539e7b09-ea92-406f-aecc-505e78b728c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.586 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap31e82792-21 in ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.588 269027 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap31e82792-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.588 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[355bf72b-7c54-4aef-907b-38dba38d4c93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.589 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad1c655-9e23-4410-8a4b-8c7519d9a7da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Sep 30 14:42:04 compute-0 systemd-udevd[271876]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.611 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[4015b030-7cdc-47a9-809f-9db5a573896b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 NetworkManager[45472]: <info>  [1759243324.6257] device (tapfdd76f4a-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 14:42:04 compute-0 NetworkManager[45472]: <info>  [1759243324.6266] device (tapfdd76f4a-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:04 compute-0 ovn_controller[154021]: 2025-09-30T14:42:04Z|00040|binding|INFO|Setting lport fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 ovn-installed in OVS
Sep 30 14:42:04 compute-0 ovn_controller[154021]: 2025-09-30T14:42:04Z|00041|binding|INFO|Setting lport fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 up in Southbound
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.648 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[f7019082-8697-4dbd-b03e-0f73db9accb9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v797: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.686 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b01e69-ec5f-4bc4-9e27-e83f63302e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.692 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[9616f36d-0106-4401-b5e9-16569b705371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 systemd-udevd[271879]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:42:04 compute-0 NetworkManager[45472]: <info>  [1759243324.6938] manager: (tap31e82792-20): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Sep 30 14:42:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:04] "GET /metrics HTTP/1.1" 200 48515 "" "Prometheus/2.51.0"
Sep 30 14:42:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:04] "GET /metrics HTTP/1.1" 200 48515 "" "Prometheus/2.51.0"
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.733 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[d8694abd-f837-47a1-a7b1-b79b8ba33c4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.736 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbe8760-3b15-41d7-8f47-63520e30e601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 NetworkManager[45472]: <info>  [1759243324.7609] device (tap31e82792-20): carrier: link connected
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.765 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[77ab8657-d2cf-418c-97f3-9e5791d39fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.784 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[5dff7d72-e1c3-4fea-8f86-f377a49b815b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e82792-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:4b:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668711, 'reachable_time': 42318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271907, 'error': None, 'target': 'ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
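[Annotation] The privsep reply above is a raw rtnetlink RTM_NEWLINK dump for tap31e82792-21, taken inside the ovnmeta- namespace named in the message header. As a rough illustration only (not the agent's own code path, which goes through neutron's ip_lib and the privsep daemon), the same attributes can be read directly with pyroute2, assuming root privileges and that the namespace still exists:

from pyroute2 import NetNS

# Open the network namespace the privsep call above targets.
with NetNS('ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3') as ns:
    for msg in ns.get_links():                     # one RTM_NEWLINK message per link
        if msg.get_attr('IFLA_IFNAME') == 'tap31e82792-21':
            print(msg.get_attr('IFLA_ADDRESS'),    # fa:16:3e:e6:4b:23
                  msg.get_attr('IFLA_OPERSTATE'),  # UP
                  msg.get_attr('IFLA_MTU'),        # 1500
                  msg.get_attr('IFLA_LINKINFO'))   # info kind: veth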
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.801 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e208bc-11ff-4d6c-a515-5425e2e95ae5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:4b23'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668711, 'tstamp': 668711}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271908, 'error': None, 'target': 'ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
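[Annotation] The RTM_NEWADDR reply above carries the IPv6 link-local address the kernel auto-configured for the tap device; it is the EUI-64 form of the MAC fa:16:3e:e6:4b:23 reported in the link dump. A minimal sketch of that derivation (illustration only, not agent code):

def mac_to_eui64_link_local(mac):
    octets = [int(x, 16) for x in mac.split(':')]
    octets[0] ^= 0x02                               # flip the universal/local bit: fa -> f8
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe between OUI and NIC part
    groups = ['%02x%02x' % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return 'fe80::' + ':'.join(groups)              # no zero-compression needed for this MAC

assert mac_to_eui64_link_local('fa:16:3e:e6:4b:23') == 'fe80::f816:3eff:fee6:4b23'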
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.823 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[e05f896f-1db4-4410-ae4f-29b974f2fd01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e82792-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:4b:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668711, 'reachable_time': 42318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271909, 'error': None, 'target': 'ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.862 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[aae1e4de-ac2a-4015-a2c8-8363df6135f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.928 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f78349-6544-42e9-9229-1e580c5daea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.929 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e82792-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.929 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.930 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31e82792-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:04 compute-0 kernel: tap31e82792-20: entered promiscuous mode
Sep 30 14:42:04 compute-0 NetworkManager[45472]: <info>  [1759243324.9330] manager: (tap31e82792-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.935 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31e82792-20, col_values=(('external_ids', {'iface-id': 'ab328d8f-edae-49de-941c-ed1296da8c90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
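[Annotation] The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) move tap31e82792-20 off br-ex, plug it into br-int, and tag its Interface row with the OVN port ID so ovn-controller can bind it. A minimal standalone sketch of the same sequence with ovsdbapp's Open_vSwitch schema API, assuming a local ovsdb-server at the default unix socket and root access (the agent itself drives this through its own long-lived IDL connection and one transaction per command):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.del_port('tap31e82792-20', bridge='br-ex', if_exists=True))
    txn.add(ovs.add_port('br-int', 'tap31e82792-20', may_exist=True))
    txn.add(ovs.db_set(
        'Interface', 'tap31e82792-20',
        ('external_ids', {'iface-id': 'ab328d8f-edae-49de-941c-ed1296da8c90'})))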
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:04 compute-0 ovn_controller[154021]: 2025-09-30T14:42:04Z|00042|binding|INFO|Releasing lport ab328d8f-edae-49de-941c-ed1296da8c90 from this chassis (sb_readonly=0)
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:04 compute-0 nova_compute[261524]: 2025-09-30 14:42:04.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.953 163966 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/31e82792-2132-423c-8fb3-0fd2453172b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/31e82792-2132-423c-8fb3-0fd2453172b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
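[Annotation] The ENOENT above is the normal first-run case: before starting a metadata proxy for this network, the agent looks for an existing haproxy pidfile. The check reduces to roughly this (a sketch of the pattern, not neutron's exact get_value_from_file helper):

import os

PID_DIR = '/var/lib/neutron/external/pids'

def get_haproxy_pid(network_id):
    path = os.path.join(PID_DIR, network_id + '.pid.haproxy')
    try:
        with open(path) as f:                 # pidfile is written by haproxy itself
            return f.read().strip() or None
    except OSError:                           # Errno 2 simply means "no proxy running yet"
        return None

print(get_haproxy_pid('31e82792-2132-423c-8fb3-0fd2453172b3'))   # None on first spawn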
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.954 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6a16d3-7cf7-4c37-b7f7-4aa152e3c36b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.955 163966 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: global
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     log         /dev/log local0 debug
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     log-tag     haproxy-metadata-proxy-31e82792-2132-423c-8fb3-0fd2453172b3
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     user        root
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     group       root
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     maxconn     1024
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     pidfile     /var/lib/neutron/external/pids/31e82792-2132-423c-8fb3-0fd2453172b3.pid.haproxy
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     daemon
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: defaults
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     log global
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     mode http
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     option httplog
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     option dontlognull
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     option http-server-close
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     option forwardfor
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     retries                 3
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     timeout http-request    30s
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     timeout connect         30s
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     timeout client          32s
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     timeout server          32s
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     timeout http-keep-alive 30s
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: listen listener
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     bind 169.254.169.254:80
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:     http-request add-header X-OVN-Network-ID 31e82792-2132-423c-8fb3-0fd2453172b3
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Sep 30 14:42:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:04.956 163966 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3', 'env', 'PROCESS_TAG=haproxy-31e82792-2132-423c-8fb3-0fd2453172b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/31e82792-2132-423c-8fb3-0fd2453172b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
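[Annotation] With no pidfile found, the agent renders the haproxy configuration dumped above to the path given with -f and launches haproxy inside the ovnmeta- namespace through rootwrap. The logged command is equivalent to the following (a sketch; the agent's create_process helper handles process tracking rather than calling subprocess directly):

import subprocess

NETWORK_ID = '31e82792-2132-423c-8fb3-0fd2453172b3'
cmd = [
    'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
    'ip', 'netns', 'exec', f'ovnmeta-{NETWORK_ID}',
    'env', f'PROCESS_TAG=haproxy-{NETWORK_ID}',
    'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf',
]
subprocess.run(cmd, check=True)   # haproxy daemonizes and writes its pidfile
                                  # under /var/lib/neutron/external/pids/

On this host the haproxy invocation evidently resolves to a container wrapper: a few lines below, podman creates and starts neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3, and the [NOTICE] worker-forked lines come from that container.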
Sep 30 14:42:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:05.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.192 2 DEBUG nova.compute.manager [req-a87bb2e8-f874-43d1-a1c0-b514c0d04854 req-dce96520-1419-4e85-b9e7-6bc5d02bd1a5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.193 2 DEBUG oslo_concurrency.lockutils [req-a87bb2e8-f874-43d1-a1c0-b514c0d04854 req-dce96520-1419-4e85-b9e7-6bc5d02bd1a5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.194 2 DEBUG oslo_concurrency.lockutils [req-a87bb2e8-f874-43d1-a1c0-b514c0d04854 req-dce96520-1419-4e85-b9e7-6bc5d02bd1a5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.195 2 DEBUG oslo_concurrency.lockutils [req-a87bb2e8-f874-43d1-a1c0-b514c0d04854 req-dce96520-1419-4e85-b9e7-6bc5d02bd1a5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.195 2 DEBUG nova.compute.manager [req-a87bb2e8-f874-43d1-a1c0-b514c0d04854 req-dce96520-1419-4e85-b9e7-6bc5d02bd1a5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Processing event network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
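[Annotation] The burst above is nova's external-event plumbing: neutron (via OVN) reports network-vif-plugged for port fdd76f4a-..., and the compute manager pops the matching waiter that the spawn path registered before plugging the VIF. The shape of that wait/pop handshake, reduced to plain threading (an illustration only, not nova's implementation):

import threading

class InstanceEvents:
    """Shape of the wait/pop handshake only; nova's real class does much more."""
    def __init__(self):
        self._lock = threading.Lock()          # the "...-events" lock in the log
        self._events = {}                      # event name -> threading.Event

    def prepare(self, name):
        with self._lock:
            return self._events.setdefault(name, threading.Event())

    def pop_instance_event(self, name):
        with self._lock:                       # "Acquiring lock ... by _pop_event"
            event = self._events.pop(name, None)
        if event is not None:
            event.set()                        # releases the spawning thread
        return event

events = InstanceEvents()
waiter = events.prepare('network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1')
# ...the spawn path plugs the VIF, OVN binds it, neutron notifies nova-api, which
# forwards the external event to this compute node and pops the waiter:
events.pop_instance_event('network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1')
waiter.wait(timeout=300)                       # "Instance event wait completed in 0 seconds"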
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:05 compute-0 podman[271983]: 2025-09-30 14:42:05.391132821 +0000 UTC m=+0.064112693 container create 8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Sep 30 14:42:05 compute-0 systemd[1]: Started libpod-conmon-8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b.scope.
Sep 30 14:42:05 compute-0 podman[271983]: 2025-09-30 14:42:05.358380822 +0000 UTC m=+0.031360704 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Sep 30 14:42:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc783c118845abbf0c38aaf0f8f63b9de3df681da2b424b7896423242f58901/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:05 compute-0 podman[271983]: 2025-09-30 14:42:05.483384273 +0000 UTC m=+0.156364165 container init 8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:42:05 compute-0 podman[271983]: 2025-09-30 14:42:05.488969338 +0000 UTC m=+0.161949210 container start 8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Sep 30 14:42:05 compute-0 neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3[271999]: [NOTICE]   (272003) : New worker (272005) forked
Sep 30 14:42:05 compute-0 neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3[271999]: [NOTICE]   (272003) : Loading success.
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.530 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243325.530122, c7b89511-067a-4ecf-9b88-41170118da87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.531 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] VM Started (Lifecycle Event)
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.534 2 DEBUG nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.538 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.542 2 INFO nova.virt.libvirt.driver [-] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Instance spawned successfully.
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.543 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.548 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.552 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.563 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.564 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.564 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.565 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.566 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.566 2 DEBUG nova.virt.libvirt.driver [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.570 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] During sync_power_state the instance has a pending task (spawning). Skip.
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.571 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243325.5313087, c7b89511-067a-4ecf-9b88-41170118da87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.571 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] VM Paused (Lifecycle Event)
Sep 30 14:42:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:42:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:05.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.698 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.700 2 INFO nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Took 8.43 seconds to spawn the instance on the hypervisor.
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.700 2 DEBUG nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.704 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243325.537435, c7b89511-067a-4ecf-9b88-41170118da87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.704 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] VM Resumed (Lifecycle Event)
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.743 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.746 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.766 2 INFO nova.compute.manager [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Took 9.39 seconds to build instance.
Sep 30 14:42:05 compute-0 nova_compute[261524]: 2025-09-30 14:42:05.783 2 DEBUG oslo_concurrency.lockutils [None req-e4f090de-fa59-49d0-ba0e-e481ddf6a43d 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:06 compute-0 ceph-mon[74194]: pgmap v797: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:42:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v798: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 761 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 14:42:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:42:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:07.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:42:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:07.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
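[Annotation] The alertmanager dispatcher above cannot deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2 within its context deadline. A quick reachability check against the same URLs (illustration only; assumes the requests library is available, and the empty alert list is merely a probe, not a meaningful notification):

import requests   # assumption: installed on the host; any HTTP client would do

for host in ('compute-1.ctlplane.example.com', 'compute-2.ctlplane.example.com'):
    url = f'http://{host}:8443/api/prometheus_receiver'
    try:
        resp = requests.post(url, json={'alerts': []}, timeout=5)
        print(url, resp.status_code)
    except requests.RequestException as exc:   # analogue of the context-deadline failure
        print(url, 'unreachable:', exc)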
Sep 30 14:42:07 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1719762284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:07 compute-0 nova_compute[261524]: 2025-09-30 14:42:07.272 2 DEBUG nova.compute.manager [req-fd11c95b-7543-4c65-be61-45beeb2e9947 req-ab312bea-e070-437c-bd61-97a7bb7fd6dd e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:07 compute-0 nova_compute[261524]: 2025-09-30 14:42:07.273 2 DEBUG oslo_concurrency.lockutils [req-fd11c95b-7543-4c65-be61-45beeb2e9947 req-ab312bea-e070-437c-bd61-97a7bb7fd6dd e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:07 compute-0 nova_compute[261524]: 2025-09-30 14:42:07.273 2 DEBUG oslo_concurrency.lockutils [req-fd11c95b-7543-4c65-be61-45beeb2e9947 req-ab312bea-e070-437c-bd61-97a7bb7fd6dd e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:07 compute-0 nova_compute[261524]: 2025-09-30 14:42:07.274 2 DEBUG oslo_concurrency.lockutils [req-fd11c95b-7543-4c65-be61-45beeb2e9947 req-ab312bea-e070-437c-bd61-97a7bb7fd6dd e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:07 compute-0 nova_compute[261524]: 2025-09-30 14:42:07.274 2 DEBUG nova.compute.manager [req-fd11c95b-7543-4c65-be61-45beeb2e9947 req-ab312bea-e070-437c-bd61-97a7bb7fd6dd e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] No waiting events found dispatching network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:42:07 compute-0 nova_compute[261524]: 2025-09-30 14:42:07.275 2 WARNING nova.compute.manager [req-fd11c95b-7543-4c65-be61-45beeb2e9947 req-ab312bea-e070-437c-bd61-97a7bb7fd6dd e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received unexpected event network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 for instance with vm_state active and task_state None.
Sep 30 14:42:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:07.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:08 compute-0 ceph-mon[74194]: pgmap v798: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 761 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 14:42:08 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/360485833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:08 compute-0 nova_compute[261524]: 2025-09-30 14:42:08.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v799: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 761 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 14:42:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:09.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:09 compute-0 nova_compute[261524]: 2025-09-30 14:42:09.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:09 compute-0 NetworkManager[45472]: <info>  [1759243329.2840] manager: (patch-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Sep 30 14:42:09 compute-0 ovn_controller[154021]: 2025-09-30T14:42:09Z|00043|binding|INFO|Releasing lport ab328d8f-edae-49de-941c-ed1296da8c90 from this chassis (sb_readonly=0)
Sep 30 14:42:09 compute-0 NetworkManager[45472]: <info>  [1759243329.2858] manager: (patch-br-int-to-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Sep 30 14:42:09 compute-0 ovn_controller[154021]: 2025-09-30T14:42:09Z|00044|binding|INFO|Releasing lport ab328d8f-edae-49de-941c-ed1296da8c90 from this chassis (sb_readonly=0)
Sep 30 14:42:09 compute-0 nova_compute[261524]: 2025-09-30 14:42:09.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:09 compute-0 nova_compute[261524]: 2025-09-30 14:42:09.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:09.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:09 compute-0 nova_compute[261524]: 2025-09-30 14:42:09.680 2 DEBUG nova.compute.manager [req-8fba6c2f-ae3c-446d-a744-d59051e24e9f req-5da8ca0a-6935-440a-9ea3-d4f9a8de25b4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-changed-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:09 compute-0 nova_compute[261524]: 2025-09-30 14:42:09.681 2 DEBUG nova.compute.manager [req-8fba6c2f-ae3c-446d-a744-d59051e24e9f req-5da8ca0a-6935-440a-9ea3-d4f9a8de25b4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Refreshing instance network info cache due to event network-changed-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:42:09 compute-0 nova_compute[261524]: 2025-09-30 14:42:09.681 2 DEBUG oslo_concurrency.lockutils [req-8fba6c2f-ae3c-446d-a744-d59051e24e9f req-5da8ca0a-6935-440a-9ea3-d4f9a8de25b4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:42:09 compute-0 nova_compute[261524]: 2025-09-30 14:42:09.682 2 DEBUG oslo_concurrency.lockutils [req-8fba6c2f-ae3c-446d-a744-d59051e24e9f req-5da8ca0a-6935-440a-9ea3-d4f9a8de25b4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:42:09 compute-0 nova_compute[261524]: 2025-09-30 14:42:09.682 2 DEBUG nova.network.neutron [req-8fba6c2f-ae3c-446d-a744-d59051e24e9f req-5da8ca0a-6935-440a-9ea3-d4f9a8de25b4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Refreshing network info cache for port fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:42:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:42:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5563 writes, 25K keys, 5563 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 5563 writes, 5563 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1534 writes, 6794 keys, 1534 commit groups, 1.0 writes per commit group, ingest: 11.41 MB, 0.02 MB/s
                                           Interval WAL: 1534 writes, 1534 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     77.0      0.50              0.09        14    0.036       0      0       0.0       0.0
                                             L6      1/0   12.39 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.2    120.5    103.8      1.58              0.39        13    0.122     67K   6764       0.0       0.0
                                            Sum      1/0   12.39 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   5.2     91.4     97.3      2.08              0.48        27    0.077     67K   6764       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5    119.8    121.3      0.73              0.23        12    0.061     34K   3109       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    120.5    103.8      1.58              0.39        13    0.122     67K   6764       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     77.5      0.50              0.09        13    0.038       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.038, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 2.1 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d7211350#2 capacity: 304.00 MB usage: 14.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000164 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(809,13.83 MB,4.54929%) FilterBlock(28,201.80 KB,0.0648248%) IndexBlock(28,356.70 KB,0.114586%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
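[Annotation] For orientation, the compaction summary in the dump above is self-consistent: the Sum row's write amplification is roughly the cumulative compaction write divided by the bytes flushed into L0 (an approximate reading of these counters, not an exact RocksDB formula):

compaction_write_gb = 0.20   # "Cumulative compaction: 0.20 GB write"
flushed_gb = 0.038           # "Flush(GB): cumulative 0.038"
print(round(compaction_write_gb / flushed_gb, 1))   # ~5.3, close to the logged W-Amp of 5.2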
Sep 30 14:42:10 compute-0 ceph-mon[74194]: pgmap v799: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 761 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 14:42:10 compute-0 nova_compute[261524]: 2025-09-30 14:42:10.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:10 compute-0 nova_compute[261524]: 2025-09-30 14:42:10.644 2 DEBUG nova.network.neutron [req-8fba6c2f-ae3c-446d-a744-d59051e24e9f req-5da8ca0a-6935-440a-9ea3-d4f9a8de25b4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updated VIF entry in instance network info cache for port fdd76f4a-6a11-467c-8f19-0b00baa4dbd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:42:10 compute-0 nova_compute[261524]: 2025-09-30 14:42:10.644 2 DEBUG nova.network.neutron [req-8fba6c2f-ae3c-446d-a744-d59051e24e9f req-5da8ca0a-6935-440a-9ea3-d4f9a8de25b4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updating instance_info_cache with network_info: [{"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
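[Annotation] The refreshed network_info above is the instance's cached view of port fdd76f4a-...: one OVS VIF on br-int with fixed IP 10.100.0.11 and floating IP 192.168.122.194. A minimal sketch of walking that structure (the dict layout is copied from the log line itself, trimmed to the fields used here):

network_info = [{
    'id': 'fdd76f4a-6a11-467c-8f19-0b00baa4dbd1',
    'address': 'fa:16:3e:0e:c5:53',
    'network': {'subnets': [{'cidr': '10.100.0.0/28',
                             'ips': [{'address': '10.100.0.11',
                                      'floating_ips': [{'address': '192.168.122.194'}]}]}]},
}]

for vif in network_info:
    for subnet in vif['network']['subnets']:
        for ip in subnet['ips']:
            print(vif['id'], vif['address'], ip['address'],
                  [fip['address'] for fip in ip['floating_ips']])
# fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 fa:16:3e:0e:c5:53 10.100.0.11 ['192.168.122.194']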
Sep 30 14:42:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v800: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 761 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 14:42:10 compute-0 nova_compute[261524]: 2025-09-30 14:42:10.669 2 DEBUG oslo_concurrency.lockutils [req-8fba6c2f-ae3c-446d-a744-d59051e24e9f req-5da8ca0a-6935-440a-9ea3-d4f9a8de25b4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:42:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:11.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2358223063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:42:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2358223063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:42:11 compute-0 nova_compute[261524]: 2025-09-30 14:42:11.427 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:42:11 compute-0 nova_compute[261524]: 2025-09-30 14:42:11.448 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:42:11 compute-0 nova_compute[261524]: 2025-09-30 14:42:11.449 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:42:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:11.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:11 compute-0 nova_compute[261524]: 2025-09-30 14:42:11.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:42:11 compute-0 nova_compute[261524]: 2025-09-30 14:42:11.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:42:11 compute-0 nova_compute[261524]: 2025-09-30 14:42:11.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:42:11 compute-0 nova_compute[261524]: 2025-09-30 14:42:11.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:42:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:42:12 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/716772960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:12 compute-0 ceph-mon[74194]: pgmap v800: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 761 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 14:42:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3544428309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/716772960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:12 compute-0 nova_compute[261524]: 2025-09-30 14:42:12.508 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:42:12 compute-0 nova_compute[261524]: 2025-09-30 14:42:12.509 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquired lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:42:12 compute-0 nova_compute[261524]: 2025-09-30 14:42:12.509 2 DEBUG nova.network.neutron [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Sep 30 14:42:12 compute-0 nova_compute[261524]: 2025-09-30 14:42:12.510 2 DEBUG nova.objects.instance [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c7b89511-067a-4ecf-9b88-41170118da87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:42:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v801: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:42:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:13.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:13.635Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:42:13 compute-0 nova_compute[261524]: 2025-09-30 14:42:13.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:13.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:14 compute-0 ceph-mon[74194]: pgmap v801: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:42:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:42:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:42:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v802: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:42:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:14] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Sep 30 14:42:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:14] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Sep 30 14:42:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:15.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:15.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.787 2 DEBUG nova.network.neutron [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updating instance_info_cache with network_info: [{"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.803 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Releasing lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.803 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.804 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.804 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.804 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.804 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.805 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.805 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.825 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.825 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.826 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.826 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:42:15 compute-0 nova_compute[261524]: 2025-09-30 14:42:15.827 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:42:16 compute-0 ceph-mon[74194]: pgmap v802: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:42:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:42:16 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251857364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.350 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.421 2 DEBUG nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.422 2 DEBUG nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.638 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.639 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4420MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.640 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.640 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v803: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 100 op/s
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.703 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Instance c7b89511-067a-4ecf-9b88-41170118da87 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.704 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.704 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:42:16 compute-0 nova_compute[261524]: 2025-09-30 14:42:16.732 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:42:16 compute-0 sudo[272068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:42:16 compute-0 sudo[272068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:16 compute-0 sudo[272068]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:16 compute-0 sudo[272094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:42:16 compute-0 sudo[272094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:17.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:17.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:42:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:17.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:42:17 compute-0 ovn_controller[154021]: 2025-09-30T14:42:17Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:c5:53 10.100.0.11
Sep 30 14:42:17 compute-0 ovn_controller[154021]: 2025-09-30T14:42:17Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:c5:53 10.100.0.11
Sep 30 14:42:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:42:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918857988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4251857364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:17 compute-0 ceph-mon[74194]: pgmap v803: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 100 op/s
Sep 30 14:42:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1918857988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:17 compute-0 nova_compute[261524]: 2025-09-30 14:42:17.266 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:42:17 compute-0 nova_compute[261524]: 2025-09-30 14:42:17.278 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:42:17 compute-0 nova_compute[261524]: 2025-09-30 14:42:17.298 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:42:17 compute-0 nova_compute[261524]: 2025-09-30 14:42:17.330 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:42:17 compute-0 nova_compute[261524]: 2025-09-30 14:42:17.330 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:17 compute-0 sudo[272094]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:42:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:42:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:42:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:42:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:42:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:42:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:42:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:17.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:42:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:42:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:42:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:42:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:42:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:42:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:42:17 compute-0 sudo[272155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:42:17 compute-0 sudo[272155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:17 compute-0 sudo[272155]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:17 compute-0 sudo[272180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:42:17 compute-0 sudo[272180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:18 compute-0 podman[272248]: 2025-09-30 14:42:18.270123423 +0000 UTC m=+0.059652328 container create 820bec13ed66ff20e011c52047f2cb54bacc7d3f8d4999bd21e6bacd3ee4db19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:42:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:42:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:42:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:42:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:42:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:42:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:42:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:42:18 compute-0 systemd[1]: Started libpod-conmon-820bec13ed66ff20e011c52047f2cb54bacc7d3f8d4999bd21e6bacd3ee4db19.scope.
Sep 30 14:42:18 compute-0 podman[272248]: 2025-09-30 14:42:18.239387306 +0000 UTC m=+0.028916291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:42:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:42:18 compute-0 podman[272248]: 2025-09-30 14:42:18.35794855 +0000 UTC m=+0.147477515 container init 820bec13ed66ff20e011c52047f2cb54bacc7d3f8d4999bd21e6bacd3ee4db19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:42:18 compute-0 podman[272248]: 2025-09-30 14:42:18.370229028 +0000 UTC m=+0.159757933 container start 820bec13ed66ff20e011c52047f2cb54bacc7d3f8d4999bd21e6bacd3ee4db19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:42:18 compute-0 podman[272248]: 2025-09-30 14:42:18.373260727 +0000 UTC m=+0.162789662 container attach 820bec13ed66ff20e011c52047f2cb54bacc7d3f8d4999bd21e6bacd3ee4db19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:42:18 compute-0 elegant_buck[272263]: 167 167
Sep 30 14:42:18 compute-0 systemd[1]: libpod-820bec13ed66ff20e011c52047f2cb54bacc7d3f8d4999bd21e6bacd3ee4db19.scope: Deactivated successfully.
Sep 30 14:42:18 compute-0 podman[272248]: 2025-09-30 14:42:18.380905985 +0000 UTC m=+0.170434890 container died 820bec13ed66ff20e011c52047f2cb54bacc7d3f8d4999bd21e6bacd3ee4db19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_buck, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:42:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e76f7f497c8437a448d24e1749401af1f0f3a63ce7dfce8e2fe626353bd0c2-merged.mount: Deactivated successfully.
Sep 30 14:42:18 compute-0 podman[272248]: 2025-09-30 14:42:18.426261781 +0000 UTC m=+0.215790716 container remove 820bec13ed66ff20e011c52047f2cb54bacc7d3f8d4999bd21e6bacd3ee4db19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:42:18 compute-0 systemd[1]: libpod-conmon-820bec13ed66ff20e011c52047f2cb54bacc7d3f8d4999bd21e6bacd3ee4db19.scope: Deactivated successfully.
Sep 30 14:42:18 compute-0 nova_compute[261524]: 2025-09-30 14:42:18.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:18 compute-0 podman[272288]: 2025-09-30 14:42:18.662556997 +0000 UTC m=+0.059941305 container create a367a85f65792bba8a8312e8472ece90d3bc2cd6c0741e7c361ea4a1a2714cc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:42:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v804: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 65 op/s
Sep 30 14:42:18 compute-0 systemd[1]: Started libpod-conmon-a367a85f65792bba8a8312e8472ece90d3bc2cd6c0741e7c361ea4a1a2714cc7.scope.
Sep 30 14:42:18 compute-0 podman[272288]: 2025-09-30 14:42:18.641485891 +0000 UTC m=+0.038870189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:42:18 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:42:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc3db35ffe736f953e8b0b2b4594aacc29a178abfc5817d8d68c93408411ce1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc3db35ffe736f953e8b0b2b4594aacc29a178abfc5817d8d68c93408411ce1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc3db35ffe736f953e8b0b2b4594aacc29a178abfc5817d8d68c93408411ce1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc3db35ffe736f953e8b0b2b4594aacc29a178abfc5817d8d68c93408411ce1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc3db35ffe736f953e8b0b2b4594aacc29a178abfc5817d8d68c93408411ce1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:18 compute-0 podman[272288]: 2025-09-30 14:42:18.768085933 +0000 UTC m=+0.165470221 container init a367a85f65792bba8a8312e8472ece90d3bc2cd6c0741e7c361ea4a1a2714cc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:42:18 compute-0 podman[272288]: 2025-09-30 14:42:18.782277241 +0000 UTC m=+0.179661519 container start a367a85f65792bba8a8312e8472ece90d3bc2cd6c0741e7c361ea4a1a2714cc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:42:18 compute-0 podman[272288]: 2025-09-30 14:42:18.785962756 +0000 UTC m=+0.183347074 container attach a367a85f65792bba8a8312e8472ece90d3bc2cd6c0741e7c361ea4a1a2714cc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:42:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:19.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:19 compute-0 nice_leavitt[272304]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:42:19 compute-0 nice_leavitt[272304]: --> All data devices are unavailable
Sep 30 14:42:19 compute-0 systemd[1]: libpod-a367a85f65792bba8a8312e8472ece90d3bc2cd6c0741e7c361ea4a1a2714cc7.scope: Deactivated successfully.
Sep 30 14:42:19 compute-0 podman[272319]: 2025-09-30 14:42:19.266598877 +0000 UTC m=+0.042528054 container died a367a85f65792bba8a8312e8472ece90d3bc2cd6c0741e7c361ea4a1a2714cc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:42:19 compute-0 ceph-mon[74194]: pgmap v804: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 65 op/s
Sep 30 14:42:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfc3db35ffe736f953e8b0b2b4594aacc29a178abfc5817d8d68c93408411ce1-merged.mount: Deactivated successfully.
Sep 30 14:42:19 compute-0 podman[272319]: 2025-09-30 14:42:19.324506238 +0000 UTC m=+0.100435435 container remove a367a85f65792bba8a8312e8472ece90d3bc2cd6c0741e7c361ea4a1a2714cc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:42:19 compute-0 systemd[1]: libpod-conmon-a367a85f65792bba8a8312e8472ece90d3bc2cd6c0741e7c361ea4a1a2714cc7.scope: Deactivated successfully.
Sep 30 14:42:19 compute-0 sudo[272180]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:19 compute-0 sudo[272335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:42:19 compute-0 sudo[272335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:19 compute-0 sudo[272335]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:19 compute-0 sudo[272360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:42:19 compute-0 sudo[272360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:19.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:20 compute-0 podman[272429]: 2025-09-30 14:42:20.081621977 +0000 UTC m=+0.055859079 container create 231e572cb1d0002b4286747d5358098eb17f1bfd6798127212be95b5d1e80d47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:42:20 compute-0 systemd[1]: Started libpod-conmon-231e572cb1d0002b4286747d5358098eb17f1bfd6798127212be95b5d1e80d47.scope.
Sep 30 14:42:20 compute-0 podman[272429]: 2025-09-30 14:42:20.061517886 +0000 UTC m=+0.035755018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:42:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:42:20 compute-0 podman[272429]: 2025-09-30 14:42:20.198861746 +0000 UTC m=+0.173098918 container init 231e572cb1d0002b4286747d5358098eb17f1bfd6798127212be95b5d1e80d47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Sep 30 14:42:20 compute-0 podman[272429]: 2025-09-30 14:42:20.21059268 +0000 UTC m=+0.184829792 container start 231e572cb1d0002b4286747d5358098eb17f1bfd6798127212be95b5d1e80d47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:42:20 compute-0 vibrant_bell[272446]: 167 167
Sep 30 14:42:20 compute-0 systemd[1]: libpod-231e572cb1d0002b4286747d5358098eb17f1bfd6798127212be95b5d1e80d47.scope: Deactivated successfully.
Sep 30 14:42:20 compute-0 podman[272429]: 2025-09-30 14:42:20.222697364 +0000 UTC m=+0.196934526 container attach 231e572cb1d0002b4286747d5358098eb17f1bfd6798127212be95b5d1e80d47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:42:20 compute-0 podman[272429]: 2025-09-30 14:42:20.223136766 +0000 UTC m=+0.197373878 container died 231e572cb1d0002b4286747d5358098eb17f1bfd6798127212be95b5d1e80d47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d19ef6a7f8ec3cb5e3226f1a13df4f3c74cf8e1de240fdf6a681cc1146ca15c2-merged.mount: Deactivated successfully.
Sep 30 14:42:20 compute-0 nova_compute[261524]: 2025-09-30 14:42:20.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:20 compute-0 podman[272429]: 2025-09-30 14:42:20.280695688 +0000 UTC m=+0.254932780 container remove 231e572cb1d0002b4286747d5358098eb17f1bfd6798127212be95b5d1e80d47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_bell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:42:20 compute-0 systemd[1]: libpod-conmon-231e572cb1d0002b4286747d5358098eb17f1bfd6798127212be95b5d1e80d47.scope: Deactivated successfully.
Sep 30 14:42:20 compute-0 podman[272470]: 2025-09-30 14:42:20.497900249 +0000 UTC m=+0.060511620 container create 513d0dba4bec06b2bff419363a5963c32060a50312487a8816c0a3fad34b716e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_villani, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:42:20 compute-0 systemd[1]: Started libpod-conmon-513d0dba4bec06b2bff419363a5963c32060a50312487a8816c0a3fad34b716e.scope.
Sep 30 14:42:20 compute-0 podman[272470]: 2025-09-30 14:42:20.47054496 +0000 UTC m=+0.033156401 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:42:20 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46dc857fbc96567fcf6a408847d7c1833e1cd3e8cc64f19fd2fb6438d3ac9895/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46dc857fbc96567fcf6a408847d7c1833e1cd3e8cc64f19fd2fb6438d3ac9895/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46dc857fbc96567fcf6a408847d7c1833e1cd3e8cc64f19fd2fb6438d3ac9895/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46dc857fbc96567fcf6a408847d7c1833e1cd3e8cc64f19fd2fb6438d3ac9895/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:20 compute-0 podman[272470]: 2025-09-30 14:42:20.617208162 +0000 UTC m=+0.179819563 container init 513d0dba4bec06b2bff419363a5963c32060a50312487a8816c0a3fad34b716e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:42:20 compute-0 podman[272470]: 2025-09-30 14:42:20.634738177 +0000 UTC m=+0.197349568 container start 513d0dba4bec06b2bff419363a5963c32060a50312487a8816c0a3fad34b716e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_villani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 14:42:20 compute-0 podman[272470]: 2025-09-30 14:42:20.639258794 +0000 UTC m=+0.201870195 container attach 513d0dba4bec06b2bff419363a5963c32060a50312487a8816c0a3fad34b716e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:42:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v805: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 65 op/s
Sep 30 14:42:20 compute-0 peaceful_villani[272487]: {
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:     "0": [
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:         {
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "devices": [
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "/dev/loop3"
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             ],
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "lv_name": "ceph_lv0",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "lv_size": "21470642176",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "name": "ceph_lv0",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "tags": {
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.cluster_name": "ceph",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.crush_device_class": "",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.encrypted": "0",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.osd_id": "0",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.type": "block",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.vdo": "0",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:                 "ceph.with_tpm": "0"
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             },
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "type": "block",
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:             "vg_name": "ceph_vg0"
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:         }
Sep 30 14:42:20 compute-0 peaceful_villani[272487]:     ]
Sep 30 14:42:20 compute-0 peaceful_villani[272487]: }
Sep 30 14:42:20 compute-0 systemd[1]: libpod-513d0dba4bec06b2bff419363a5963c32060a50312487a8816c0a3fad34b716e.scope: Deactivated successfully.
Sep 30 14:42:20 compute-0 podman[272470]: 2025-09-30 14:42:20.957086504 +0000 UTC m=+0.519697885 container died 513d0dba4bec06b2bff419363a5963c32060a50312487a8816c0a3fad34b716e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_villani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-46dc857fbc96567fcf6a408847d7c1833e1cd3e8cc64f19fd2fb6438d3ac9895-merged.mount: Deactivated successfully.
Sep 30 14:42:21 compute-0 podman[272470]: 2025-09-30 14:42:21.005950351 +0000 UTC m=+0.568561712 container remove 513d0dba4bec06b2bff419363a5963c32060a50312487a8816c0a3fad34b716e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_villani, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:42:21 compute-0 systemd[1]: libpod-conmon-513d0dba4bec06b2bff419363a5963c32060a50312487a8816c0a3fad34b716e.scope: Deactivated successfully.
Sep 30 14:42:21 compute-0 sudo[272360]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:21.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:21 compute-0 sudo[272509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:42:21 compute-0 sudo[272509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:21 compute-0 sudo[272509]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:21 compute-0 sudo[272534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:42:21 compute-0 sudo[272534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:21.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:21 compute-0 podman[272604]: 2025-09-30 14:42:21.701303067 +0000 UTC m=+0.066908026 container create c43890d6e817faf2c417ee7a6119191aba0a376d7e27474288ea2f9330ee344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_solomon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:42:21 compute-0 ceph-mon[74194]: pgmap v805: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 65 op/s
Sep 30 14:42:21 compute-0 systemd[1]: Started libpod-conmon-c43890d6e817faf2c417ee7a6119191aba0a376d7e27474288ea2f9330ee344b.scope.
Sep 30 14:42:21 compute-0 podman[272604]: 2025-09-30 14:42:21.679545783 +0000 UTC m=+0.045150782 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:42:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:42:21 compute-0 podman[272604]: 2025-09-30 14:42:21.809893622 +0000 UTC m=+0.175498591 container init c43890d6e817faf2c417ee7a6119191aba0a376d7e27474288ea2f9330ee344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_solomon, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:42:21 compute-0 podman[272604]: 2025-09-30 14:42:21.820022505 +0000 UTC m=+0.185627444 container start c43890d6e817faf2c417ee7a6119191aba0a376d7e27474288ea2f9330ee344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:42:21 compute-0 podman[272604]: 2025-09-30 14:42:21.823376922 +0000 UTC m=+0.188981891 container attach c43890d6e817faf2c417ee7a6119191aba0a376d7e27474288ea2f9330ee344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_solomon, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:42:21 compute-0 hopeful_solomon[272621]: 167 167
Sep 30 14:42:21 compute-0 podman[272604]: 2025-09-30 14:42:21.825952959 +0000 UTC m=+0.191557898 container died c43890d6e817faf2c417ee7a6119191aba0a376d7e27474288ea2f9330ee344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_solomon, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:42:21 compute-0 systemd[1]: libpod-c43890d6e817faf2c417ee7a6119191aba0a376d7e27474288ea2f9330ee344b.scope: Deactivated successfully.
Sep 30 14:42:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f470c93eb0b876c5a7dc6fd523a3830354eab06d93e584722a556614920d8271-merged.mount: Deactivated successfully.
Sep 30 14:42:21 compute-0 podman[272604]: 2025-09-30 14:42:21.869928009 +0000 UTC m=+0.235532948 container remove c43890d6e817faf2c417ee7a6119191aba0a376d7e27474288ea2f9330ee344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:42:21 compute-0 systemd[1]: libpod-conmon-c43890d6e817faf2c417ee7a6119191aba0a376d7e27474288ea2f9330ee344b.scope: Deactivated successfully.
Sep 30 14:42:22 compute-0 rsyslogd[1004]: imjournal: 3804 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Sep 30 14:42:22 compute-0 podman[272646]: 2025-09-30 14:42:22.082961692 +0000 UTC m=+0.065888299 container create fca9fe071dc1c1621fb3be830e5c1f178794e5bf43191b031468bb1d2cd679fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:42:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:22 compute-0 systemd[1]: Started libpod-conmon-fca9fe071dc1c1621fb3be830e5c1f178794e5bf43191b031468bb1d2cd679fb.scope.
Sep 30 14:42:22 compute-0 podman[272646]: 2025-09-30 14:42:22.050592183 +0000 UTC m=+0.033518850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:42:22 compute-0 nova_compute[261524]: 2025-09-30 14:42:22.146 2 INFO nova.compute.manager [None req-fb229901-97b4-4b37-8360-fe40ae5c91c5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Get console output
Sep 30 14:42:22 compute-0 nova_compute[261524]: 2025-09-30 14:42:22.155 696 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Sep 30 14:42:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cb9ec8fc54fe9a87a3ece397c5e20caaef6d36e793e71c68a979bba3c08082/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cb9ec8fc54fe9a87a3ece397c5e20caaef6d36e793e71c68a979bba3c08082/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cb9ec8fc54fe9a87a3ece397c5e20caaef6d36e793e71c68a979bba3c08082/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cb9ec8fc54fe9a87a3ece397c5e20caaef6d36e793e71c68a979bba3c08082/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:22 compute-0 podman[272646]: 2025-09-30 14:42:22.191052554 +0000 UTC m=+0.173979161 container init fca9fe071dc1c1621fb3be830e5c1f178794e5bf43191b031468bb1d2cd679fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 14:42:22 compute-0 podman[272646]: 2025-09-30 14:42:22.204374649 +0000 UTC m=+0.187301226 container start fca9fe071dc1c1621fb3be830e5c1f178794e5bf43191b031468bb1d2cd679fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_roentgen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Sep 30 14:42:22 compute-0 podman[272646]: 2025-09-30 14:42:22.208323562 +0000 UTC m=+0.191250169 container attach fca9fe071dc1c1621fb3be830e5c1f178794e5bf43191b031468bb1d2cd679fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_roentgen, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 14:42:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v806: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Sep 30 14:42:22 compute-0 lvm[272738]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:42:22 compute-0 lvm[272738]: VG ceph_vg0 finished
Sep 30 14:42:22 compute-0 infallible_roentgen[272663]: {}
Sep 30 14:42:22 compute-0 systemd[1]: libpod-fca9fe071dc1c1621fb3be830e5c1f178794e5bf43191b031468bb1d2cd679fb.scope: Deactivated successfully.
Sep 30 14:42:22 compute-0 podman[272646]: 2025-09-30 14:42:22.984256688 +0000 UTC m=+0.967183305 container died fca9fe071dc1c1621fb3be830e5c1f178794e5bf43191b031468bb1d2cd679fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:42:22 compute-0 systemd[1]: libpod-fca9fe071dc1c1621fb3be830e5c1f178794e5bf43191b031468bb1d2cd679fb.scope: Consumed 1.287s CPU time.
Sep 30 14:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-02cb9ec8fc54fe9a87a3ece397c5e20caaef6d36e793e71c68a979bba3c08082-merged.mount: Deactivated successfully.
Sep 30 14:42:23 compute-0 podman[272646]: 2025-09-30 14:42:23.044211703 +0000 UTC m=+1.027138310 container remove fca9fe071dc1c1621fb3be830e5c1f178794e5bf43191b031468bb1d2cd679fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_roentgen, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:42:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:23 compute-0 systemd[1]: libpod-conmon-fca9fe071dc1c1621fb3be830e5c1f178794e5bf43191b031468bb1d2cd679fb.scope: Deactivated successfully.
Sep 30 14:42:23 compute-0 sudo[272534]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:42:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:42:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:42:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:42:23 compute-0 sudo[272752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:42:23 compute-0 sudo[272752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:23 compute-0 sudo[272752]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:23.636Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:42:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:23.636Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:42:23 compute-0 nova_compute[261524]: 2025-09-30 14:42:23.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:23.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:24 compute-0 sudo[272779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:42:24 compute-0 sudo[272779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:24 compute-0 sudo[272779]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:24 compute-0 ceph-mon[74194]: pgmap v806: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Sep 30 14:42:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:42:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:42:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v807: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Sep 30 14:42:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:24] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Sep 30 14:42:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:24] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Sep 30 14:42:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:25 compute-0 podman[272807]: 2025-09-30 14:42:25.16830627 +0000 UTC m=+0.078859256 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Sep 30 14:42:25 compute-0 podman[272808]: 2025-09-30 14:42:25.179674395 +0000 UTC m=+0.091950685 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Sep 30 14:42:25 compute-0 podman[272805]: 2025-09-30 14:42:25.191657305 +0000 UTC m=+0.111130832 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Sep 30 14:42:25 compute-0 podman[272806]: 2025-09-30 14:42:25.207190768 +0000 UTC m=+0.127301481 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:42:25 compute-0 nova_compute[261524]: 2025-09-30 14:42:25.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:25.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:26 compute-0 ceph-mon[74194]: pgmap v807: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Sep 30 14:42:26 compute-0 nova_compute[261524]: 2025-09-30 14:42:26.179 2 DEBUG oslo_concurrency.lockutils [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "interface-c7b89511-067a-4ecf-9b88-41170118da87-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:26 compute-0 nova_compute[261524]: 2025-09-30 14:42:26.180 2 DEBUG oslo_concurrency.lockutils [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "interface-c7b89511-067a-4ecf-9b88-41170118da87-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:26 compute-0 nova_compute[261524]: 2025-09-30 14:42:26.180 2 DEBUG nova.objects.instance [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'flavor' on Instance uuid c7b89511-067a-4ecf-9b88-41170118da87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:42:26 compute-0 nova_compute[261524]: 2025-09-30 14:42:26.460 2 DEBUG nova.objects.instance [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'pci_requests' on Instance uuid c7b89511-067a-4ecf-9b88-41170118da87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:42:26 compute-0 nova_compute[261524]: 2025-09-30 14:42:26.476 2 DEBUG nova.network.neutron [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Sep 30 14:42:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v808: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 14:42:27 compute-0 nova_compute[261524]: 2025-09-30 14:42:27.034 2 DEBUG nova.policy [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59c80c4f189d4667aec64b43afc69ed2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Sep 30 14:42:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:27.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:27.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:42:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:27.130Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:42:27 compute-0 ceph-mon[74194]: pgmap v808: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 14:42:27 compute-0 nova_compute[261524]: 2025-09-30 14:42:27.504 2 DEBUG nova.network.neutron [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Successfully created port: 03e495c3-98e6-487d-b0e9-ad172586b71d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Sep 30 14:42:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:27.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:28 compute-0 nova_compute[261524]: 2025-09-30 14:42:28.591 2 DEBUG nova.network.neutron [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Successfully updated port: 03e495c3-98e6-487d-b0e9-ad172586b71d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Sep 30 14:42:28 compute-0 nova_compute[261524]: 2025-09-30 14:42:28.622 2 DEBUG oslo_concurrency.lockutils [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:42:28 compute-0 nova_compute[261524]: 2025-09-30 14:42:28.622 2 DEBUG oslo_concurrency.lockutils [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquired lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:42:28 compute-0 nova_compute[261524]: 2025-09-30 14:42:28.622 2 DEBUG nova.network.neutron [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Sep 30 14:42:28 compute-0 nova_compute[261524]: 2025-09-30 14:42:28.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v809: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 148 KiB/s wr, 37 op/s
Sep 30 14:42:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:29 compute-0 nova_compute[261524]: 2025-09-30 14:42:29.053 2 DEBUG nova.compute.manager [req-e3f74327-57d3-44cf-b369-f82c8a9063a5 req-5262f112-b4fa-41f7-9fcb-61e4c6b13116 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-changed-03e495c3-98e6-487d-b0e9-ad172586b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:29 compute-0 nova_compute[261524]: 2025-09-30 14:42:29.054 2 DEBUG nova.compute.manager [req-e3f74327-57d3-44cf-b369-f82c8a9063a5 req-5262f112-b4fa-41f7-9fcb-61e4c6b13116 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Refreshing instance network info cache due to event network-changed-03e495c3-98e6-487d-b0e9-ad172586b71d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:42:29 compute-0 nova_compute[261524]: 2025-09-30 14:42:29.054 2 DEBUG oslo_concurrency.lockutils [req-e3f74327-57d3-44cf-b369-f82c8a9063a5 req-5262f112-b4fa-41f7-9fcb-61e4c6b13116 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:42:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:29.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:42:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:42:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:29.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:42:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:42:29 compute-0 ceph-mon[74194]: pgmap v809: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 148 KiB/s wr, 37 op/s
Sep 30 14:42:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:42:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:42:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:42:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:42:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:42:30 compute-0 nova_compute[261524]: 2025-09-30 14:42:30.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v810: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 148 KiB/s wr, 37 op/s
Sep 30 14:42:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:31.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:31.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:31 compute-0 ceph-mon[74194]: pgmap v810: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 148 KiB/s wr, 37 op/s
Sep 30 14:42:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.172 2 DEBUG nova.network.neutron [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updating instance_info_cache with network_info: [{"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.196 2 DEBUG oslo_concurrency.lockutils [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Releasing lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.197 2 DEBUG oslo_concurrency.lockutils [req-e3f74327-57d3-44cf-b369-f82c8a9063a5 req-5262f112-b4fa-41f7-9fcb-61e4c6b13116 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.197 2 DEBUG nova.network.neutron [req-e3f74327-57d3-44cf-b369-f82c8a9063a5 req-5262f112-b4fa-41f7-9fcb-61e4c6b13116 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Refreshing network info cache for port 03e495c3-98e6-487d-b0e9-ad172586b71d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.200 2 DEBUG nova.virt.libvirt.vif [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2031188285',display_name='tempest-TestNetworkBasicOps-server-2031188285',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2031188285',id=3,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAImrCwKSNyEdm98pPfZ4sjS6exrK+14H3hUnFOdL8Y5dY0kzO28iP+MIhWAQTc22os7ImKeOILYxLVSkpa7J7So6O1Rtmi7C5fPdNcVDkCJS373V5RS7Al59MW7kPAog==',key_name='tempest-TestNetworkBasicOps-1988791059',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-nwtvr0g5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:42:05Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c7b89511-067a-4ecf-9b88-41170118da87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.201 2 DEBUG nova.network.os_vif_util [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.201 2 DEBUG nova.network.os_vif_util [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.202 2 DEBUG os_vif [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03e495c3-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap03e495c3-98, col_values=(('external_ids', {'iface-id': '03e495c3-98e6-487d-b0e9-ad172586b71d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:7b:b7', 'vm-uuid': 'c7b89511-067a-4ecf-9b88-41170118da87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 NetworkManager[45472]: <info>  [1759243352.2104] manager: (tap03e495c3-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.221 2 INFO os_vif [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98')
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.222 2 DEBUG nova.virt.libvirt.vif [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2031188285',display_name='tempest-TestNetworkBasicOps-server-2031188285',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2031188285',id=3,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAImrCwKSNyEdm98pPfZ4sjS6exrK+14H3hUnFOdL8Y5dY0kzO28iP+MIhWAQTc22os7ImKeOILYxLVSkpa7J7So6O1Rtmi7C5fPdNcVDkCJS373V5RS7Al59MW7kPAog==',key_name='tempest-TestNetworkBasicOps-1988791059',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-nwtvr0g5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:42:05Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c7b89511-067a-4ecf-9b88-41170118da87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.222 2 DEBUG nova.network.os_vif_util [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.223 2 DEBUG nova.network.os_vif_util [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.226 2 DEBUG nova.virt.libvirt.guest [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] attach device xml: <interface type="ethernet">
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <mac address="fa:16:3e:bb:7b:b7"/>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <model type="virtio"/>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <mtu size="1442"/>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <target dev="tap03e495c3-98"/>
Sep 30 14:42:32 compute-0 nova_compute[261524]: </interface>
Sep 30 14:42:32 compute-0 nova_compute[261524]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Sep 30 14:42:32 compute-0 kernel: tap03e495c3-98: entered promiscuous mode
Sep 30 14:42:32 compute-0 NetworkManager[45472]: <info>  [1759243352.2424] manager: (tap03e495c3-98): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Sep 30 14:42:32 compute-0 ovn_controller[154021]: 2025-09-30T14:42:32Z|00045|binding|INFO|Claiming lport 03e495c3-98e6-487d-b0e9-ad172586b71d for this chassis.
Sep 30 14:42:32 compute-0 ovn_controller[154021]: 2025-09-30T14:42:32Z|00046|binding|INFO|03e495c3-98e6-487d-b0e9-ad172586b71d: Claiming fa:16:3e:bb:7b:b7 10.100.0.21
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.259 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:7b:b7 10.100.0.21'], port_security=['fa:16:3e:bb:7b:b7 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': 'c7b89511-067a-4ecf-9b88-41170118da87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '2', 'neutron:security_group_ids': '577c7718-6276-434c-be06-b394756c15c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46771cda-5331-426b-97b3-f94a6a201932, chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=03e495c3-98e6-487d-b0e9-ad172586b71d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.261 163966 INFO neutron.agent.ovn.metadata.agent [-] Port 03e495c3-98e6-487d-b0e9-ad172586b71d in datapath 3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1 bound to our chassis
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.263 163966 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.281 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a16aa0-614a-4ec8-8484-b8ae735ceb1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.283 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d9d4b9c-e1 in ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.284 269027 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d9d4b9c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.284 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[8a294806-57a6-4f17-9f18-c45da259c5cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.285 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[429f1959-640e-40a4-88e2-9041ef88ee83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 systemd-udevd[272901]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 ovn_controller[154021]: 2025-09-30T14:42:32Z|00047|binding|INFO|Setting lport 03e495c3-98e6-487d-b0e9-ad172586b71d ovn-installed in OVS
Sep 30 14:42:32 compute-0 ovn_controller[154021]: 2025-09-30T14:42:32Z|00048|binding|INFO|Setting lport 03e495c3-98e6-487d-b0e9-ad172586b71d up in Southbound
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.308 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[8f78a675-faa9-4766-8aa7-c631b289188f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 NetworkManager[45472]: <info>  [1759243352.3199] device (tap03e495c3-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 14:42:32 compute-0 NetworkManager[45472]: <info>  [1759243352.3213] device (tap03e495c3-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.339 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa37b5d-5d5b-453e-8211-1bd91f1b2236]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.368 2 DEBUG nova.virt.libvirt.driver [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.369 2 DEBUG nova.virt.libvirt.driver [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.369 2 DEBUG nova.virt.libvirt.driver [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No VIF found with MAC fa:16:3e:0e:c5:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.369 2 DEBUG nova.virt.libvirt.driver [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No VIF found with MAC fa:16:3e:bb:7b:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.380 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[f2cf5532-a10d-47f1-9a7f-ab0cfd54d20f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.386 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[e626bfff-2ea2-4168-b5c3-aeef418ed2dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 NetworkManager[45472]: <info>  [1759243352.3880] manager: (tap3d9d4b9c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Sep 30 14:42:32 compute-0 systemd-udevd[272905]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.396 2 DEBUG nova.virt.libvirt.guest [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-2031188285</nova:name>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:42:32</nova:creationTime>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:42:32 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:42:32 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:42:32 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:42:32 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:42:32 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:42:32 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:42:32 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:42:32 compute-0 nova_compute[261524]:     <nova:port uuid="fdd76f4a-6a11-467c-8f19-0b00baa4dbd1">
Sep 30 14:42:32 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 14:42:32 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:42:32 compute-0 nova_compute[261524]:     <nova:port uuid="03e495c3-98e6-487d-b0e9-ad172586b71d">
Sep 30 14:42:32 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Sep 30 14:42:32 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:42:32 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:42:32 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:42:32 compute-0 nova_compute[261524]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.423 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[df93c98c-3cb5-4371-b649-28ad476f61e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.426 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[705d5a47-9f8a-4581-890c-7c906c563acd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.429 2 DEBUG oslo_concurrency.lockutils [None req-ce851311-e0f7-4801-bd85-87f83684caa6 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "interface-c7b89511-067a-4ecf-9b88-41170118da87-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:32 compute-0 NetworkManager[45472]: <info>  [1759243352.4589] device (tap3d9d4b9c-e0): carrier: link connected
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.467 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[c414dbb4-6161-4f9c-948b-63a9952d24d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.488 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[a5119a25-f92e-48bd-a6c9-578e3faa8cd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d9d4b9c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:ad:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671481, 'reachable_time': 44368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272927, 'error': None, 'target': 'ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.506 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[72917fe1-0633-4368-b7b1-766be001de0e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:add0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671481, 'tstamp': 671481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272928, 'error': None, 'target': 'ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.525 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d0eb9a-f508-41dc-bc5b-39c1873c3561]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d9d4b9c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:ad:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671481, 'reachable_time': 44368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272929, 'error': None, 'target': 'ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.561 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[989452f5-a4f8-4a9d-a20d-9e669bc4cf16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.636 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb6b3ed-fdcb-4790-8468-82afabd4fb04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.638 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d9d4b9c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.639 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.639 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d9d4b9c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:32 compute-0 NetworkManager[45472]: <info>  [1759243352.6429] manager: (tap3d9d4b9c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Sep 30 14:42:32 compute-0 kernel: tap3d9d4b9c-e0: entered promiscuous mode
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.647 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d9d4b9c-e0, col_values=(('external_ids', {'iface-id': '3fc76fc1-b7a0-4b40-8547-540abc8847bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 ovn_controller[154021]: 2025-09-30T14:42:32Z|00049|binding|INFO|Releasing lport 3fc76fc1-b7a0-4b40-8547-540abc8847bb from this chassis (sb_readonly=0)
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.650 163966 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.651 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[e69147b5-3b23-4103-af79-6309ff81e6ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.652 163966 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: global
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     log         /dev/log local0 debug
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     log-tag     haproxy-metadata-proxy-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     user        root
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     group       root
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     maxconn     1024
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     pidfile     /var/lib/neutron/external/pids/3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1.pid.haproxy
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     daemon
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: defaults
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     log global
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     mode http
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     option httplog
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     option dontlognull
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     option http-server-close
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     option forwardfor
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     retries                 3
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     timeout http-request    30s
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     timeout connect         30s
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     timeout client          32s
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     timeout server          32s
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     timeout http-keep-alive 30s
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: listen listener
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     bind 169.254.169.254:80
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:     http-request add-header X-OVN-Network-ID 3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Sep 30 14:42:32 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:32.653 163966 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1', 'env', 'PROCESS_TAG=haproxy-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v811: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 148 KiB/s wr, 37 op/s
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.794 2 DEBUG nova.compute.manager [req-993297bd-3877-466a-813d-ce77d454944c req-2ff1ab9f-055e-440a-8c78-21014217fb8b e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-plugged-03e495c3-98e6-487d-b0e9-ad172586b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.795 2 DEBUG oslo_concurrency.lockutils [req-993297bd-3877-466a-813d-ce77d454944c req-2ff1ab9f-055e-440a-8c78-21014217fb8b e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.795 2 DEBUG oslo_concurrency.lockutils [req-993297bd-3877-466a-813d-ce77d454944c req-2ff1ab9f-055e-440a-8c78-21014217fb8b e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.795 2 DEBUG oslo_concurrency.lockutils [req-993297bd-3877-466a-813d-ce77d454944c req-2ff1ab9f-055e-440a-8c78-21014217fb8b e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.795 2 DEBUG nova.compute.manager [req-993297bd-3877-466a-813d-ce77d454944c req-2ff1ab9f-055e-440a-8c78-21014217fb8b e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] No waiting events found dispatching network-vif-plugged-03e495c3-98e6-487d-b0e9-ad172586b71d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:42:32 compute-0 nova_compute[261524]: 2025-09-30 14:42:32.796 2 WARNING nova.compute.manager [req-993297bd-3877-466a-813d-ce77d454944c req-2ff1ab9f-055e-440a-8c78-21014217fb8b e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received unexpected event network-vif-plugged-03e495c3-98e6-487d-b0e9-ad172586b71d for instance with vm_state active and task_state None.
Sep 30 14:42:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:33.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:33 compute-0 podman[272959]: 2025-09-30 14:42:33.089760848 +0000 UTC m=+0.054997247 container create b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:42:33 compute-0 systemd[1]: Started libpod-conmon-b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c.scope.
Sep 30 14:42:33 compute-0 podman[272959]: 2025-09-30 14:42:33.061079254 +0000 UTC m=+0.026315653 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Sep 30 14:42:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca7f63b0c5236ce9c03ac30e96f2a013ce595c2095fd4328461df9779148c15/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 14:42:33 compute-0 podman[272959]: 2025-09-30 14:42:33.19863264 +0000 UTC m=+0.163869069 container init b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:42:33 compute-0 podman[272959]: 2025-09-30 14:42:33.205670483 +0000 UTC m=+0.170906902 container start b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:42:33 compute-0 neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1[272974]: [NOTICE]   (272980) : New worker (272982) forked
Sep 30 14:42:33 compute-0 neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1[272974]: [NOTICE]   (272980) : Loading success.
Sep 30 14:42:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:33.637Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:42:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:33.638Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:42:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:33.638Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:42:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:42:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:33.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:42:33 compute-0 ovn_controller[154021]: 2025-09-30T14:42:33Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:7b:b7 10.100.0.21
Sep 30 14:42:33 compute-0 ovn_controller[154021]: 2025-09-30T14:42:33Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:7b:b7 10.100.0.21
Sep 30 14:42:33 compute-0 ceph-mon[74194]: pgmap v811: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 148 KiB/s wr, 37 op/s
Sep 30 14:42:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.235 2 DEBUG nova.network.neutron [req-e3f74327-57d3-44cf-b369-f82c8a9063a5 req-5262f112-b4fa-41f7-9fcb-61e4c6b13116 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updated VIF entry in instance network info cache for port 03e495c3-98e6-487d-b0e9-ad172586b71d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.236 2 DEBUG nova.network.neutron [req-e3f74327-57d3-44cf-b369-f82c8a9063a5 req-5262f112-b4fa-41f7-9fcb-61e4c6b13116 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updating instance_info_cache with network_info: [{"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.258 2 DEBUG oslo_concurrency.lockutils [req-e3f74327-57d3-44cf-b369-f82c8a9063a5 req-5262f112-b4fa-41f7-9fcb-61e4c6b13116 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.637 2 DEBUG oslo_concurrency.lockutils [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "interface-c7b89511-067a-4ecf-9b88-41170118da87-03e495c3-98e6-487d-b0e9-ad172586b71d" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.637 2 DEBUG oslo_concurrency.lockutils [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "interface-c7b89511-067a-4ecf-9b88-41170118da87-03e495c3-98e6-487d-b0e9-ad172586b71d" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.659 2 DEBUG nova.objects.instance [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'flavor' on Instance uuid c7b89511-067a-4ecf-9b88-41170118da87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:42:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v812: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 13 KiB/s wr, 0 op/s
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.683 2 DEBUG nova.virt.libvirt.vif [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2031188285',display_name='tempest-TestNetworkBasicOps-server-2031188285',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2031188285',id=3,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAImrCwKSNyEdm98pPfZ4sjS6exrK+14H3hUnFOdL8Y5dY0kzO28iP+MIhWAQTc22os7ImKeOILYxLVSkpa7J7So6O1Rtmi7C5fPdNcVDkCJS373V5RS7Al59MW7kPAog==',key_name='tempest-TestNetworkBasicOps-1988791059',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-nwtvr0g5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:42:05Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c7b89511-067a-4ecf-9b88-41170118da87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.683 2 DEBUG nova.network.os_vif_util [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.684 2 DEBUG nova.network.os_vif_util [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.686 2 DEBUG nova.virt.libvirt.guest [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:7b:b7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap03e495c3-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.688 2 DEBUG nova.virt.libvirt.guest [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:7b:b7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap03e495c3-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.689 2 DEBUG nova.virt.libvirt.driver [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Attempting to detach device tap03e495c3-98 from instance c7b89511-067a-4ecf-9b88-41170118da87 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.690 2 DEBUG nova.virt.libvirt.guest [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] detach device xml: <interface type="ethernet">
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <mac address="fa:16:3e:bb:7b:b7"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <model type="virtio"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <mtu size="1442"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <target dev="tap03e495c3-98"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]: </interface>
Sep 30 14:42:34 compute-0 nova_compute[261524]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.694 2 DEBUG nova.virt.libvirt.guest [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:7b:b7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap03e495c3-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.696 2 DEBUG nova.virt.libvirt.guest [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bb:7b:b7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap03e495c3-98"/></interface>not found in domain: <domain type='kvm' id='2'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <name>instance-00000003</name>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <uuid>c7b89511-067a-4ecf-9b88-41170118da87</uuid>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-2031188285</nova:name>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:42:32</nova:creationTime>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:port uuid="fdd76f4a-6a11-467c-8f19-0b00baa4dbd1">
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:port uuid="03e495c3-98e6-487d-b0e9-ad172586b71d">
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:42:34 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <memory unit='KiB'>131072</memory>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <currentMemory unit='KiB'>131072</currentMemory>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <vcpu placement='static'>1</vcpu>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <resource>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <partition>/machine</partition>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </resource>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <sysinfo type='smbios'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <system>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='manufacturer'>RDO</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='product'>OpenStack Compute</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='serial'>c7b89511-067a-4ecf-9b88-41170118da87</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='uuid'>c7b89511-067a-4ecf-9b88-41170118da87</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='family'>Virtual Machine</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </system>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <os>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <boot dev='hd'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <smbios mode='sysinfo'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </os>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <features>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <vmcoreinfo state='on'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </features>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <cpu mode='custom' match='exact' check='full'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <vendor>AMD</vendor>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='x2apic'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc-deadline'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='hypervisor'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc_adjust'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='spec-ctrl'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='stibp'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='arch-capabilities'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='ssbd'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='cmp_legacy'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='overflow-recov'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='succor'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='ibrs'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='amd-ssbd'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='virt-ssbd'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='lbrv'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='tsc-scale'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='vmcb-clean'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='flushbyasid'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='pause-filter'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='pfthreshold'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='svme-addr-chk'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='rdctl-no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='mds-no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='gds-no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='rfds-no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='xsaves'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='svm'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='topoext'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='npt'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='nrip-save'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <clock offset='utc'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <timer name='pit' tickpolicy='delay'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <timer name='rtc' tickpolicy='catchup'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <timer name='hpet' present='no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <on_poweroff>destroy</on_poweroff>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <on_reboot>restart</on_reboot>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <on_crash>destroy</on_crash>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <disk type='network' device='disk'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/c7b89511-067a-4ecf-9b88-41170118da87_disk' index='2'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       </source>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target dev='vda' bus='virtio'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='virtio-disk0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <disk type='network' device='cdrom'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/c7b89511-067a-4ecf-9b88-41170118da87_disk.config' index='1'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       </source>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target dev='sda' bus='sata'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <readonly/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='sata0-0-0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='0' model='pcie-root'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pcie.0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='1' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='1' port='0x10'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='2' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='2' port='0x11'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='3' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='3' port='0x12'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.3'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='4' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='4' port='0x13'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.4'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='5' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='5' port='0x14'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.5'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='6' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='6' port='0x15'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='7' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='7' port='0x16'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.7'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='8' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='8' port='0x17'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.8'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='9' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='9' port='0x18'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.9'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='10' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='10' port='0x19'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.10'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='11' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='11' port='0x1a'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.11'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='12' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='12' port='0x1b'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.12'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='13' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='13' port='0x1c'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.13'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='14' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='14' port='0x1d'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.14'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='15' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='15' port='0x1e'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.15'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='16' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='16' port='0x1f'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.16'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='17' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='17' port='0x20'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.17'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='18' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='18' port='0x21'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.18'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='19' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='19' port='0x22'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.19'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='20' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='20' port='0x23'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.20'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='21' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='21' port='0x24'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.21'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='22' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='22' port='0x25'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.22'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='23' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='23' port='0x26'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.23'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='24' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='24' port='0x27'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.24'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='25' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='25' port='0x28'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.25'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-pci-bridge'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.26'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='usb' index='0' model='piix3-uhci'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='usb'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='sata' index='0'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='ide'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <interface type='ethernet'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <mac address='fa:16:3e:0e:c5:53'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target dev='tapfdd76f4a-6a'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model type='virtio'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <driver name='vhost' rx_queue_size='512'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <mtu size='1442'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='net0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <interface type='ethernet'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <mac address='fa:16:3e:bb:7b:b7'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target dev='tap03e495c3-98'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model type='virtio'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <driver name='vhost' rx_queue_size='512'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <mtu size='1442'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='net1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <serial type='pty'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/console.log' append='off'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target type='isa-serial' port='0'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <model name='isa-serial'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       </target>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <console type='pty' tty='/dev/pts/0'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/console.log' append='off'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target type='serial' port='0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </console>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <input type='tablet' bus='usb'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='input0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='usb' bus='0' port='1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <input type='mouse' bus='ps2'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='input1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <input type='keyboard' bus='ps2'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='input2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <listen type='address' address='::0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <audio id='1' type='none'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <video>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model type='virtio' heads='1' primary='yes'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='video0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </video>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <watchdog model='itco' action='reset'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='watchdog0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </watchdog>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <memballoon model='virtio'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <stats period='10'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='balloon0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <rng model='virtio'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <backend model='random'>/dev/urandom</backend>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='rng0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <label>system_u:system_r:svirt_t:s0:c592,c924</label>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c592,c924</imagelabel>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <label>+107:+107</label>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <imagelabel>+107:+107</imagelabel>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:42:34 compute-0 nova_compute[261524]: </domain>
Sep 30 14:42:34 compute-0 nova_compute[261524]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.699 2 INFO nova.virt.libvirt.driver [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully detached device tap03e495c3-98 from instance c7b89511-067a-4ecf-9b88-41170118da87 from the persistent domain config.
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.699 2 DEBUG nova.virt.libvirt.driver [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] (1/8): Attempting to detach device tap03e495c3-98 with device alias net1 from instance c7b89511-067a-4ecf-9b88-41170118da87 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.700 2 DEBUG nova.virt.libvirt.guest [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] detach device xml: <interface type="ethernet">
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <mac address="fa:16:3e:bb:7b:b7"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <model type="virtio"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <mtu size="1442"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <target dev="tap03e495c3-98"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]: </interface>
Sep 30 14:42:34 compute-0 nova_compute[261524]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Sep 30 14:42:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:34] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Sep 30 14:42:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:34] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Sep 30 14:42:34 compute-0 kernel: tap03e495c3-98 (unregistering): left promiscuous mode
Sep 30 14:42:34 compute-0 NetworkManager[45472]: <info>  [1759243354.8022] device (tap03e495c3-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 14:42:34 compute-0 ovn_controller[154021]: 2025-09-30T14:42:34Z|00050|binding|INFO|Releasing lport 03e495c3-98e6-487d-b0e9-ad172586b71d from this chassis (sb_readonly=0)
Sep 30 14:42:34 compute-0 ovn_controller[154021]: 2025-09-30T14:42:34Z|00051|binding|INFO|Setting lport 03e495c3-98e6-487d-b0e9-ad172586b71d down in Southbound
Sep 30 14:42:34 compute-0 ovn_controller[154021]: 2025-09-30T14:42:34Z|00052|binding|INFO|Removing iface tap03e495c3-98 ovn-installed in OVS
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.815 2 DEBUG nova.virt.libvirt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Received event <DeviceRemovedEvent: 1759243354.814682, c7b89511-067a-4ecf-9b88-41170118da87 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Sep 30 14:42:34 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:34.819 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:7b:b7 10.100.0.21'], port_security=['fa:16:3e:bb:7b:b7 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': 'c7b89511-067a-4ecf-9b88-41170118da87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '4', 'neutron:security_group_ids': '577c7718-6276-434c-be06-b394756c15c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46771cda-5331-426b-97b3-f94a6a201932, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=03e495c3-98e6-487d-b0e9-ad172586b71d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.820 2 DEBUG nova.virt.libvirt.driver [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Start waiting for the detach event from libvirt for device tap03e495c3-98 with device alias net1 for instance c7b89511-067a-4ecf-9b88-41170118da87 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.820 2 DEBUG nova.virt.libvirt.guest [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:7b:b7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap03e495c3-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:42:34 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:34.820 163966 INFO neutron.agent.ovn.metadata.agent [-] Port 03e495c3-98e6-487d-b0e9-ad172586b71d in datapath 3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1 unbound from our chassis
Sep 30 14:42:34 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:34.821 163966 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Sep 30 14:42:34 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:34.822 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[430f62fc-6e94-4829-8fc9-0bcddc89605c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:34 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:34.822 163966 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1 namespace which is not needed anymore
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.826 2 DEBUG nova.virt.libvirt.guest [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bb:7b:b7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap03e495c3-98"/></interface>not found in domain: <domain type='kvm' id='2'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <name>instance-00000003</name>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <uuid>c7b89511-067a-4ecf-9b88-41170118da87</uuid>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-2031188285</nova:name>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:42:32</nova:creationTime>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:port uuid="fdd76f4a-6a11-467c-8f19-0b00baa4dbd1">
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:port uuid="03e495c3-98e6-487d-b0e9-ad172586b71d">
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:42:34 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <memory unit='KiB'>131072</memory>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <currentMemory unit='KiB'>131072</currentMemory>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <vcpu placement='static'>1</vcpu>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <resource>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <partition>/machine</partition>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </resource>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <sysinfo type='smbios'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <system>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='manufacturer'>RDO</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='product'>OpenStack Compute</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='serial'>c7b89511-067a-4ecf-9b88-41170118da87</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='uuid'>c7b89511-067a-4ecf-9b88-41170118da87</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <entry name='family'>Virtual Machine</entry>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </system>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <os>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <boot dev='hd'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <smbios mode='sysinfo'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </os>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <features>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <vmcoreinfo state='on'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </features>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <cpu mode='custom' match='exact' check='full'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <vendor>AMD</vendor>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='x2apic'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc-deadline'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='hypervisor'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc_adjust'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='spec-ctrl'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='stibp'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='arch-capabilities'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='ssbd'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='cmp_legacy'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='overflow-recov'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='succor'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='ibrs'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='amd-ssbd'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='virt-ssbd'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='lbrv'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='tsc-scale'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='vmcb-clean'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='flushbyasid'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='pause-filter'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='pfthreshold'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='svme-addr-chk'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='rdctl-no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='mds-no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='gds-no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='rfds-no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='xsaves'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='svm'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='require' name='topoext'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='npt'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <feature policy='disable' name='nrip-save'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <clock offset='utc'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <timer name='pit' tickpolicy='delay'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <timer name='rtc' tickpolicy='catchup'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <timer name='hpet' present='no'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <on_poweroff>destroy</on_poweroff>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <on_reboot>restart</on_reboot>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <on_crash>destroy</on_crash>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <disk type='network' device='disk'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/c7b89511-067a-4ecf-9b88-41170118da87_disk' index='2'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       </source>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target dev='vda' bus='virtio'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='virtio-disk0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <disk type='network' device='cdrom'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/c7b89511-067a-4ecf-9b88-41170118da87_disk.config' index='1'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       </source>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target dev='sda' bus='sata'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <readonly/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='sata0-0-0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='0' model='pcie-root'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pcie.0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='1' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='1' port='0x10'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='2' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='2' port='0x11'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='3' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='3' port='0x12'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.3'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='4' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='4' port='0x13'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.4'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='5' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='5' port='0x14'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.5'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='6' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='6' port='0x15'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='7' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='7' port='0x16'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.7'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='8' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='8' port='0x17'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.8'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='9' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='9' port='0x18'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.9'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='10' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='10' port='0x19'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.10'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='11' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='11' port='0x1a'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.11'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='12' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='12' port='0x1b'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.12'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='13' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='13' port='0x1c'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.13'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='14' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='14' port='0x1d'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.14'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='15' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='15' port='0x1e'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.15'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='16' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='16' port='0x1f'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.16'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='17' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='17' port='0x20'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.17'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='18' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='18' port='0x21'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.18'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='19' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='19' port='0x22'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.19'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='20' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='20' port='0x23'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.20'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='21' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='21' port='0x24'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.21'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='22' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='22' port='0x25'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.22'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='23' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='23' port='0x26'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.23'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='24' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='24' port='0x27'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.24'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='25' model='pcie-root-port'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target chassis='25' port='0x28'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.25'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model name='pcie-pci-bridge'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='pci.26'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='usb' index='0' model='piix3-uhci'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='usb'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <controller type='sata' index='0'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='ide'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <interface type='ethernet'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <mac address='fa:16:3e:0e:c5:53'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target dev='tapfdd76f4a-6a'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model type='virtio'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <driver name='vhost' rx_queue_size='512'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <mtu size='1442'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='net0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <serial type='pty'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/console.log' append='off'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target type='isa-serial' port='0'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:         <model name='isa-serial'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       </target>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <console type='pty' tty='/dev/pts/0'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/console.log' append='off'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <target type='serial' port='0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </console>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <input type='tablet' bus='usb'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='input0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='usb' bus='0' port='1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <input type='mouse' bus='ps2'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='input1'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <input type='keyboard' bus='ps2'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='input2'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <listen type='address' address='::0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <audio id='1' type='none'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <video>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <model type='virtio' heads='1' primary='yes'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='video0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </video>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <watchdog model='itco' action='reset'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='watchdog0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </watchdog>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <memballoon model='virtio'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <stats period='10'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='balloon0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <rng model='virtio'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <backend model='random'>/dev/urandom</backend>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <alias name='rng0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <label>system_u:system_r:svirt_t:s0:c592,c924</label>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c592,c924</imagelabel>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <label>+107:+107</label>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <imagelabel>+107:+107</imagelabel>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:42:34 compute-0 nova_compute[261524]: </domain>
Sep 30 14:42:34 compute-0 nova_compute[261524]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.827 2 INFO nova.virt.libvirt.driver [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully detached device tap03e495c3-98 from instance c7b89511-067a-4ecf-9b88-41170118da87 from the live domain config.
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.827 2 DEBUG nova.virt.libvirt.vif [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2031188285',display_name='tempest-TestNetworkBasicOps-server-2031188285',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2031188285',id=3,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAImrCwKSNyEdm98pPfZ4sjS6exrK+14H3hUnFOdL8Y5dY0kzO28iP+MIhWAQTc22os7ImKeOILYxLVSkpa7J7So6O1Rtmi7C5fPdNcVDkCJS373V5RS7Al59MW7kPAog==',key_name='tempest-TestNetworkBasicOps-1988791059',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-nwtvr0g5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:42:05Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c7b89511-067a-4ecf-9b88-41170118da87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, 
"active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.827 2 DEBUG nova.network.os_vif_util [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.828 2 DEBUG nova.network.os_vif_util [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.828 2 DEBUG os_vif [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.830 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e495c3-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.835 2 INFO os_vif [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98')
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.836 2 DEBUG nova.virt.libvirt.guest [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-2031188285</nova:name>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:42:34</nova:creationTime>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     <nova:port uuid="fdd76f4a-6a11-467c-8f19-0b00baa4dbd1">
Sep 30 14:42:34 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 14:42:34 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:42:34 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:42:34 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:42:34 compute-0 nova_compute[261524]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.878 2 DEBUG nova.compute.manager [req-1b712243-b27d-4424-8ea4-361367af687c req-203a54a3-eea0-45c1-b799-523325cee619 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-plugged-03e495c3-98e6-487d-b0e9-ad172586b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.878 2 DEBUG oslo_concurrency.lockutils [req-1b712243-b27d-4424-8ea4-361367af687c req-203a54a3-eea0-45c1-b799-523325cee619 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.879 2 DEBUG oslo_concurrency.lockutils [req-1b712243-b27d-4424-8ea4-361367af687c req-203a54a3-eea0-45c1-b799-523325cee619 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.879 2 DEBUG oslo_concurrency.lockutils [req-1b712243-b27d-4424-8ea4-361367af687c req-203a54a3-eea0-45c1-b799-523325cee619 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.879 2 DEBUG nova.compute.manager [req-1b712243-b27d-4424-8ea4-361367af687c req-203a54a3-eea0-45c1-b799-523325cee619 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] No waiting events found dispatching network-vif-plugged-03e495c3-98e6-487d-b0e9-ad172586b71d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:42:34 compute-0 nova_compute[261524]: 2025-09-30 14:42:34.880 2 WARNING nova.compute.manager [req-1b712243-b27d-4424-8ea4-361367af687c req-203a54a3-eea0-45c1-b799-523325cee619 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received unexpected event network-vif-plugged-03e495c3-98e6-487d-b0e9-ad172586b71d for instance with vm_state active and task_state None.
Sep 30 14:42:34 compute-0 neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1[272974]: [NOTICE]   (272980) : haproxy version is 2.8.14-c23fe91
Sep 30 14:42:34 compute-0 neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1[272974]: [NOTICE]   (272980) : path to executable is /usr/sbin/haproxy
Sep 30 14:42:34 compute-0 neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1[272974]: [WARNING]  (272980) : Exiting Master process...
Sep 30 14:42:34 compute-0 neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1[272974]: [ALERT]    (272980) : Current worker (272982) exited with code 143 (Terminated)
Sep 30 14:42:34 compute-0 neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1[272974]: [WARNING]  (272980) : All workers exited. Exiting... (0)
Sep 30 14:42:34 compute-0 systemd[1]: libpod-b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c.scope: Deactivated successfully.
Sep 30 14:42:34 compute-0 podman[273014]: 2025-09-30 14:42:34.965946229 +0000 UTC m=+0.049900535 container died b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:42:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c-userdata-shm.mount: Deactivated successfully.
Sep 30 14:42:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-cca7f63b0c5236ce9c03ac30e96f2a013ce595c2095fd4328461df9779148c15-merged.mount: Deactivated successfully.
Sep 30 14:42:35 compute-0 podman[273014]: 2025-09-30 14:42:35.004701204 +0000 UTC m=+0.088655540 container cleanup b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Sep 30 14:42:35 compute-0 systemd[1]: libpod-conmon-b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c.scope: Deactivated successfully.
Sep 30 14:42:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:35.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:35 compute-0 podman[273041]: 2025-09-30 14:42:35.086159885 +0000 UTC m=+0.052171843 container remove b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Sep 30 14:42:35 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:35.093 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d3e308-dd9f-4d3f-b6a9-d7416de36919]: (4, ('Tue Sep 30 02:42:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1 (b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c)\nb44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c\nTue Sep 30 02:42:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1 (b44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c)\nb44ff627fc6ab8aa661d16f5b7df72bee4bfec8996e46651f5f50e4b7d22505c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:35 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:35.095 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[8feba322-59b9-46f3-83ac-1ee5dbbf8bc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:35 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:35.095 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d9d4b9c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:35 compute-0 kernel: tap3d9d4b9c-e0: left promiscuous mode
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:35 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:35.104 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[70800eec-3253-4aea-8475-a9ea253e6a34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:35 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:35.148 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[a681d28d-cdc5-4ed5-9926-5d6190f87e9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:35 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:35.150 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[47a13698-bc92-4af7-a9d6-ed61cf4699b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:35 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:35.171 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[559de3fb-63c2-4d6c-8200-ceb20e55720e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671472, 'reachable_time': 33723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273059, 'error': None, 'target': 'ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d3d9d4b9c\x2de1ad\x2d44e2\x2d8fb0\x2d3f1c8b1cb1c1.mount: Deactivated successfully.
Sep 30 14:42:35 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:35.175 164124 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Sep 30 14:42:35 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:35.175 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[ff41f94e-2a26-4591-b21f-9e50acbe2969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.420 2 DEBUG oslo_concurrency.lockutils [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.420 2 DEBUG oslo_concurrency.lockutils [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquired lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.420 2 DEBUG nova.network.neutron [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.477 2 DEBUG nova.compute.manager [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-deleted-03e495c3-98e6-487d-b0e9-ad172586b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.477 2 INFO nova.compute.manager [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Neutron deleted interface 03e495c3-98e6-487d-b0e9-ad172586b71d; detaching it from the instance and deleting it from the info cache
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.478 2 DEBUG nova.network.neutron [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updating instance_info_cache with network_info: [{"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.512 2 DEBUG nova.objects.instance [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lazy-loading 'system_metadata' on Instance uuid c7b89511-067a-4ecf-9b88-41170118da87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.556 2 DEBUG nova.objects.instance [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lazy-loading 'flavor' on Instance uuid c7b89511-067a-4ecf-9b88-41170118da87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.597 2 DEBUG nova.virt.libvirt.vif [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2031188285',display_name='tempest-TestNetworkBasicOps-server-2031188285',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2031188285',id=3,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAImrCwKSNyEdm98pPfZ4sjS6exrK+14H3hUnFOdL8Y5dY0kzO28iP+MIhWAQTc22os7ImKeOILYxLVSkpa7J7So6O1Rtmi7C5fPdNcVDkCJS373V5RS7Al59MW7kPAog==',key_name='tempest-TestNetworkBasicOps-1988791059',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-nwtvr0g5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:42:05Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c7b89511-067a-4ecf-9b88-41170118da87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.597 2 DEBUG nova.network.os_vif_util [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Converting VIF {"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.598 2 DEBUG nova.network.os_vif_util [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.602 2 DEBUG nova.virt.libvirt.guest [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:7b:b7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap03e495c3-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.606 2 DEBUG nova.virt.libvirt.guest [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bb:7b:b7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap03e495c3-98"/></interface>not found in domain: <domain type='kvm' id='2'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <name>instance-00000003</name>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <uuid>c7b89511-067a-4ecf-9b88-41170118da87</uuid>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-2031188285</nova:name>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:42:34</nova:creationTime>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:port uuid="fdd76f4a-6a11-467c-8f19-0b00baa4dbd1">
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:42:35 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <memory unit='KiB'>131072</memory>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <currentMemory unit='KiB'>131072</currentMemory>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <vcpu placement='static'>1</vcpu>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <resource>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <partition>/machine</partition>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </resource>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <sysinfo type='smbios'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <system>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='manufacturer'>RDO</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='product'>OpenStack Compute</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='serial'>c7b89511-067a-4ecf-9b88-41170118da87</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='uuid'>c7b89511-067a-4ecf-9b88-41170118da87</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='family'>Virtual Machine</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </system>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <os>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <boot dev='hd'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <smbios mode='sysinfo'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </os>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <features>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <vmcoreinfo state='on'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </features>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <cpu mode='custom' match='exact' check='full'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <vendor>AMD</vendor>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='x2apic'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc-deadline'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='hypervisor'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc_adjust'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='spec-ctrl'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='stibp'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='arch-capabilities'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='ssbd'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='cmp_legacy'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='overflow-recov'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='succor'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='ibrs'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='amd-ssbd'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='virt-ssbd'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='lbrv'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='tsc-scale'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='vmcb-clean'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='flushbyasid'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='pause-filter'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='pfthreshold'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='svme-addr-chk'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='rdctl-no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='mds-no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='gds-no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='rfds-no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='xsaves'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='svm'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='topoext'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='npt'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='nrip-save'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <clock offset='utc'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <timer name='pit' tickpolicy='delay'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <timer name='rtc' tickpolicy='catchup'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <timer name='hpet' present='no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <on_poweroff>destroy</on_poweroff>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <on_reboot>restart</on_reboot>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <on_crash>destroy</on_crash>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <disk type='network' device='disk'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/c7b89511-067a-4ecf-9b88-41170118da87_disk' index='2'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       </source>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target dev='vda' bus='virtio'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='virtio-disk0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <disk type='network' device='cdrom'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/c7b89511-067a-4ecf-9b88-41170118da87_disk.config' index='1'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       </source>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target dev='sda' bus='sata'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <readonly/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='sata0-0-0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='0' model='pcie-root'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pcie.0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='1' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='1' port='0x10'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='2' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='2' port='0x11'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='3' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='3' port='0x12'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.3'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='4' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='4' port='0x13'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.4'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='5' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='5' port='0x14'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.5'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='6' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='6' port='0x15'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='7' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='7' port='0x16'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.7'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='8' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='8' port='0x17'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.8'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='9' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='9' port='0x18'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.9'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='10' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='10' port='0x19'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.10'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='11' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='11' port='0x1a'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.11'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='12' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='12' port='0x1b'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.12'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='13' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='13' port='0x1c'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.13'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='14' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='14' port='0x1d'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.14'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='15' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='15' port='0x1e'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.15'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='16' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='16' port='0x1f'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.16'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='17' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='17' port='0x20'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.17'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='18' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='18' port='0x21'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.18'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='19' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='19' port='0x22'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.19'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='20' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='20' port='0x23'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.20'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='21' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='21' port='0x24'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.21'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='22' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='22' port='0x25'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.22'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='23' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='23' port='0x26'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.23'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='24' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='24' port='0x27'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.24'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='25' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='25' port='0x28'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.25'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-pci-bridge'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.26'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='usb' index='0' model='piix3-uhci'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='usb'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='sata' index='0'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='ide'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <interface type='ethernet'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <mac address='fa:16:3e:0e:c5:53'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target dev='tapfdd76f4a-6a'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model type='virtio'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <driver name='vhost' rx_queue_size='512'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <mtu size='1442'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='net0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <serial type='pty'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/console.log' append='off'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target type='isa-serial' port='0'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <model name='isa-serial'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       </target>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <console type='pty' tty='/dev/pts/0'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/console.log' append='off'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target type='serial' port='0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </console>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <input type='tablet' bus='usb'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='input0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='usb' bus='0' port='1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <input type='mouse' bus='ps2'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='input1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <input type='keyboard' bus='ps2'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='input2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <listen type='address' address='::0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <audio id='1' type='none'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <video>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model type='virtio' heads='1' primary='yes'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='video0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </video>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <watchdog model='itco' action='reset'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='watchdog0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </watchdog>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <memballoon model='virtio'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <stats period='10'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='balloon0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <rng model='virtio'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <backend model='random'>/dev/urandom</backend>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='rng0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <label>system_u:system_r:svirt_t:s0:c592,c924</label>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c592,c924</imagelabel>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <label>+107:+107</label>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <imagelabel>+107:+107</imagelabel>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:42:35 compute-0 nova_compute[261524]: </domain>
Sep 30 14:42:35 compute-0 nova_compute[261524]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.608 2 DEBUG nova.virt.libvirt.guest [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:7b:b7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap03e495c3-98"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.614 2 DEBUG nova.virt.libvirt.guest [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bb:7b:b7"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap03e495c3-98"/></interface>not found in domain: <domain type='kvm' id='2'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <name>instance-00000003</name>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <uuid>c7b89511-067a-4ecf-9b88-41170118da87</uuid>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-2031188285</nova:name>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:42:34</nova:creationTime>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:port uuid="fdd76f4a-6a11-467c-8f19-0b00baa4dbd1">
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:42:35 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <memory unit='KiB'>131072</memory>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <currentMemory unit='KiB'>131072</currentMemory>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <vcpu placement='static'>1</vcpu>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <resource>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <partition>/machine</partition>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </resource>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <sysinfo type='smbios'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <system>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='manufacturer'>RDO</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='product'>OpenStack Compute</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='serial'>c7b89511-067a-4ecf-9b88-41170118da87</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='uuid'>c7b89511-067a-4ecf-9b88-41170118da87</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <entry name='family'>Virtual Machine</entry>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </system>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <os>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <boot dev='hd'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <smbios mode='sysinfo'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </os>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <features>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <vmcoreinfo state='on'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </features>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <cpu mode='custom' match='exact' check='full'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <vendor>AMD</vendor>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='x2apic'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc-deadline'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='hypervisor'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc_adjust'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='spec-ctrl'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='stibp'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='arch-capabilities'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='ssbd'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='cmp_legacy'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='overflow-recov'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='succor'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='ibrs'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='amd-ssbd'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='virt-ssbd'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='lbrv'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='tsc-scale'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='vmcb-clean'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='flushbyasid'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='pause-filter'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='pfthreshold'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='svme-addr-chk'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='rdctl-no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='mds-no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='gds-no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='rfds-no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='xsaves'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='svm'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='require' name='topoext'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='npt'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <feature policy='disable' name='nrip-save'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <clock offset='utc'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <timer name='pit' tickpolicy='delay'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <timer name='rtc' tickpolicy='catchup'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <timer name='hpet' present='no'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <on_poweroff>destroy</on_poweroff>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <on_reboot>restart</on_reboot>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <on_crash>destroy</on_crash>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <disk type='network' device='disk'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/c7b89511-067a-4ecf-9b88-41170118da87_disk' index='2'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       </source>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target dev='vda' bus='virtio'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='virtio-disk0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <disk type='network' device='cdrom'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/c7b89511-067a-4ecf-9b88-41170118da87_disk.config' index='1'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       </source>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target dev='sda' bus='sata'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <readonly/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='sata0-0-0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='0' model='pcie-root'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pcie.0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='1' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='1' port='0x10'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='2' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='2' port='0x11'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='3' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='3' port='0x12'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.3'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='4' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='4' port='0x13'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.4'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='5' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='5' port='0x14'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.5'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='6' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='6' port='0x15'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='7' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='7' port='0x16'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.7'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='8' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='8' port='0x17'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.8'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='9' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='9' port='0x18'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.9'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='10' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='10' port='0x19'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.10'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='11' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='11' port='0x1a'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.11'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='12' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='12' port='0x1b'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.12'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='13' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='13' port='0x1c'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.13'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='14' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='14' port='0x1d'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.14'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='15' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='15' port='0x1e'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.15'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='16' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='16' port='0x1f'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.16'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='17' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='17' port='0x20'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.17'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='18' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='18' port='0x21'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.18'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='19' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='19' port='0x22'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.19'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='20' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='20' port='0x23'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.20'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='21' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='21' port='0x24'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.21'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='22' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='22' port='0x25'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.22'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='23' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='23' port='0x26'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.23'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='24' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='24' port='0x27'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.24'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='25' model='pcie-root-port'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target chassis='25' port='0x28'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.25'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model name='pcie-pci-bridge'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='pci.26'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='usb' index='0' model='piix3-uhci'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='usb'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <controller type='sata' index='0'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='ide'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <interface type='ethernet'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <mac address='fa:16:3e:0e:c5:53'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target dev='tapfdd76f4a-6a'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model type='virtio'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <driver name='vhost' rx_queue_size='512'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <mtu size='1442'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='net0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <serial type='pty'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/console.log' append='off'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target type='isa-serial' port='0'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:         <model name='isa-serial'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       </target>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <console type='pty' tty='/dev/pts/0'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87/console.log' append='off'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <target type='serial' port='0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </console>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <input type='tablet' bus='usb'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='input0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='usb' bus='0' port='1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <input type='mouse' bus='ps2'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='input1'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <input type='keyboard' bus='ps2'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='input2'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </input>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <listen type='address' address='::0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <audio id='1' type='none'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <video>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <model type='virtio' heads='1' primary='yes'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='video0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </video>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <watchdog model='itco' action='reset'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='watchdog0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </watchdog>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <memballoon model='virtio'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <stats period='10'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='balloon0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <rng model='virtio'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <backend model='random'>/dev/urandom</backend>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <alias name='rng0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <label>system_u:system_r:svirt_t:s0:c592,c924</label>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c592,c924</imagelabel>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <label>+107:+107</label>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <imagelabel>+107:+107</imagelabel>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:42:35 compute-0 nova_compute[261524]: </domain>
Sep 30 14:42:35 compute-0 nova_compute[261524]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.614 2 WARNING nova.virt.libvirt.driver [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Detaching interface fa:16:3e:bb:7b:b7 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap03e495c3-98' not found.
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.615 2 DEBUG nova.virt.libvirt.vif [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2031188285',display_name='tempest-TestNetworkBasicOps-server-2031188285',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2031188285',id=3,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAImrCwKSNyEdm98pPfZ4sjS6exrK+14H3hUnFOdL8Y5dY0kzO28iP+MIhWAQTc22os7ImKeOILYxLVSkpa7J7So6O1Rtmi7C5fPdNcVDkCJS373V5RS7Al59MW7kPAog==',key_name='tempest-TestNetworkBasicOps-1988791059',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-nwtvr0g5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:42:05Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c7b89511-067a-4ecf-9b88-41170118da87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.616 2 DEBUG nova.network.os_vif_util [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Converting VIF {"id": "03e495c3-98e6-487d-b0e9-ad172586b71d", "address": "fa:16:3e:bb:7b:b7", "network": {"id": "3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1", "bridge": "br-int", "label": "tempest-network-smoke--1716238045", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e495c3-98", "ovs_interfaceid": "03e495c3-98e6-487d-b0e9-ad172586b71d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.617 2 DEBUG nova.network.os_vif_util [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.617 2 DEBUG os_vif [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.619 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e495c3-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.620 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.623 2 INFO os_vif [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:7b:b7,bridge_name='br-int',has_traffic_filtering=True,id=03e495c3-98e6-487d-b0e9-ad172586b71d,network=Network(3d9d4b9c-e1ad-44e2-8fb0-3f1c8b1cb1c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e495c3-98')
Sep 30 14:42:35 compute-0 nova_compute[261524]: 2025-09-30 14:42:35.626 2 DEBUG nova.virt.libvirt.guest [req-5458c5c8-2073-437f-9ddf-5067309d9ed7 req-7bd2eef6-8726-45b6-9a9a-de0b3559b651 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-2031188285</nova:name>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:42:35</nova:creationTime>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     <nova:port uuid="fdd76f4a-6a11-467c-8f19-0b00baa4dbd1">
Sep 30 14:42:35 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 14:42:35 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:42:35 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:42:35 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:42:35 compute-0 nova_compute[261524]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Sep 30 14:42:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:35.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:35 compute-0 ceph-mon[74194]: pgmap v812: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 13 KiB/s wr, 0 op/s
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.481 2 INFO nova.network.neutron [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Port 03e495c3-98e6-487d-b0e9-ad172586b71d from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.482 2 DEBUG nova.network.neutron [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updating instance_info_cache with network_info: [{"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.503 2 DEBUG oslo_concurrency.lockutils [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Releasing lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.527 2 DEBUG oslo_concurrency.lockutils [None req-c81b8d85-1bc7-4fc5-9090-e6475859672c 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "interface-c7b89511-067a-4ecf-9b88-41170118da87-03e495c3-98e6-487d-b0e9-ad172586b71d" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 1.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v813: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 14 KiB/s wr, 1 op/s
Sep 30 14:42:36 compute-0 ovn_controller[154021]: 2025-09-30T14:42:36Z|00053|binding|INFO|Releasing lport ab328d8f-edae-49de-941c-ed1296da8c90 from this chassis (sb_readonly=0)
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.952 2 DEBUG nova.compute.manager [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-unplugged-03e495c3-98e6-487d-b0e9-ad172586b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.953 2 DEBUG oslo_concurrency.lockutils [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.953 2 DEBUG oslo_concurrency.lockutils [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.953 2 DEBUG oslo_concurrency.lockutils [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.953 2 DEBUG nova.compute.manager [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] No waiting events found dispatching network-vif-unplugged-03e495c3-98e6-487d-b0e9-ad172586b71d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.954 2 WARNING nova.compute.manager [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received unexpected event network-vif-unplugged-03e495c3-98e6-487d-b0e9-ad172586b71d for instance with vm_state active and task_state None.
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.954 2 DEBUG nova.compute.manager [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-plugged-03e495c3-98e6-487d-b0e9-ad172586b71d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.954 2 DEBUG oslo_concurrency.lockutils [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.954 2 DEBUG oslo_concurrency.lockutils [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.955 2 DEBUG oslo_concurrency.lockutils [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.955 2 DEBUG nova.compute.manager [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] No waiting events found dispatching network-vif-plugged-03e495c3-98e6-487d-b0e9-ad172586b71d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:42:36 compute-0 nova_compute[261524]: 2025-09-30 14:42:36.955 2 WARNING nova.compute.manager [req-3629cbe5-f95b-493f-850c-730c25396c2e req-adbf6db4-1055-49c7-9359-8bc80b9b3e1f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received unexpected event network-vif-plugged-03e495c3-98e6-487d-b0e9-ad172586b71d for instance with vm_state active and task_state None.
Sep 30 14:42:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:42:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:37.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:42:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:37.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.655 2 DEBUG oslo_concurrency.lockutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.656 2 DEBUG oslo_concurrency.lockutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.656 2 DEBUG oslo_concurrency.lockutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.657 2 DEBUG oslo_concurrency.lockutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.657 2 DEBUG oslo_concurrency.lockutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.659 2 INFO nova.compute.manager [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Terminating instance
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.661 2 DEBUG nova.compute.manager [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Sep 30 14:42:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:37.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:37 compute-0 kernel: tapfdd76f4a-6a (unregistering): left promiscuous mode
Sep 30 14:42:37 compute-0 NetworkManager[45472]: <info>  [1759243357.7242] device (tapfdd76f4a-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 14:42:37 compute-0 ovn_controller[154021]: 2025-09-30T14:42:37Z|00054|binding|INFO|Releasing lport fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 from this chassis (sb_readonly=0)
Sep 30 14:42:37 compute-0 ovn_controller[154021]: 2025-09-30T14:42:37Z|00055|binding|INFO|Setting lport fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 down in Southbound
Sep 30 14:42:37 compute-0 ovn_controller[154021]: 2025-09-30T14:42:37Z|00056|binding|INFO|Removing iface tapfdd76f4a-6a ovn-installed in OVS
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:37 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:37.747 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:c5:53 10.100.0.11'], port_security=['fa:16:3e:0e:c5:53 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c7b89511-067a-4ecf-9b88-41170118da87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e82792-2132-423c-8fb3-0fd2453172b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8607df68-326f-4ba4-bbe1-2a261640b927', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb4fab7e-c674-4340-863c-8e9b5fa39d81, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=fdd76f4a-6a11-467c-8f19-0b00baa4dbd1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:42:37 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:37.749 163966 INFO neutron.agent.ovn.metadata.agent [-] Port fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 in datapath 31e82792-2132-423c-8fb3-0fd2453172b3 unbound from our chassis
Sep 30 14:42:37 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:37.751 163966 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31e82792-2132-423c-8fb3-0fd2453172b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Sep 30 14:42:37 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:37.752 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[72629b85-8832-44c3-9c1d-986e42266085]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:37 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:37.753 163966 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3 namespace which is not needed anymore
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:37 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Sep 30 14:42:37 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 13.739s CPU time.
Sep 30 14:42:37 compute-0 systemd-machined[215710]: Machine qemu-2-instance-00000003 terminated.
Sep 30 14:42:37 compute-0 ceph-mon[74194]: pgmap v813: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 14 KiB/s wr, 1 op/s
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.902 2 INFO nova.virt.libvirt.driver [-] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Instance destroyed successfully.
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.902 2 DEBUG nova.objects.instance [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'resources' on Instance uuid c7b89511-067a-4ecf-9b88-41170118da87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.916 2 DEBUG nova.virt.libvirt.vif [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2031188285',display_name='tempest-TestNetworkBasicOps-server-2031188285',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2031188285',id=3,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAImrCwKSNyEdm98pPfZ4sjS6exrK+14H3hUnFOdL8Y5dY0kzO28iP+MIhWAQTc22os7ImKeOILYxLVSkpa7J7So6O1Rtmi7C5fPdNcVDkCJS373V5RS7Al59MW7kPAog==',key_name='tempest-TestNetworkBasicOps-1988791059',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-nwtvr0g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:42:05Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c7b89511-067a-4ecf-9b88-41170118da87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": 
"fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.916 2 DEBUG nova.network.os_vif_util [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "address": "fa:16:3e:0e:c5:53", "network": {"id": "31e82792-2132-423c-8fb3-0fd2453172b3", "bridge": "br-int", "label": "tempest-network-smoke--156271235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd76f4a-6a", "ovs_interfaceid": "fdd76f4a-6a11-467c-8f19-0b00baa4dbd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.917 2 DEBUG nova.network.os_vif_util [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:c5:53,bridge_name='br-int',has_traffic_filtering=True,id=fdd76f4a-6a11-467c-8f19-0b00baa4dbd1,network=Network(31e82792-2132-423c-8fb3-0fd2453172b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd76f4a-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.918 2 DEBUG os_vif [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:c5:53,bridge_name='br-int',has_traffic_filtering=True,id=fdd76f4a-6a11-467c-8f19-0b00baa4dbd1,network=Network(31e82792-2132-423c-8fb3-0fd2453172b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd76f4a-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdd76f4a-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Sep 30 14:42:37 compute-0 neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3[271999]: [NOTICE]   (272003) : haproxy version is 2.8.14-c23fe91
Sep 30 14:42:37 compute-0 neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3[271999]: [NOTICE]   (272003) : path to executable is /usr/sbin/haproxy
Sep 30 14:42:37 compute-0 neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3[271999]: [WARNING]  (272003) : Exiting Master process...
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:37 compute-0 neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3[271999]: [ALERT]    (272003) : Current worker (272005) exited with code 143 (Terminated)
Sep 30 14:42:37 compute-0 neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3[271999]: [WARNING]  (272003) : All workers exited. Exiting... (0)
Sep 30 14:42:37 compute-0 nova_compute[261524]: 2025-09-30 14:42:37.934 2 INFO os_vif [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:c5:53,bridge_name='br-int',has_traffic_filtering=True,id=fdd76f4a-6a11-467c-8f19-0b00baa4dbd1,network=Network(31e82792-2132-423c-8fb3-0fd2453172b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd76f4a-6a')
Sep 30 14:42:37 compute-0 systemd[1]: libpod-8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b.scope: Deactivated successfully.
Sep 30 14:42:37 compute-0 podman[273090]: 2025-09-30 14:42:37.942500846 +0000 UTC m=+0.065465858 container died 8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Sep 30 14:42:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b-userdata-shm.mount: Deactivated successfully.
Sep 30 14:42:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dc783c118845abbf0c38aaf0f8f63b9de3df681da2b424b7896423242f58901-merged.mount: Deactivated successfully.
Sep 30 14:42:37 compute-0 podman[273090]: 2025-09-30 14:42:37.998709803 +0000 UTC m=+0.121674815 container cleanup 8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:42:38 compute-0 systemd[1]: libpod-conmon-8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b.scope: Deactivated successfully.
Sep 30 14:42:38 compute-0 podman[273151]: 2025-09-30 14:42:38.084657651 +0000 UTC m=+0.054558855 container remove 8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.096 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1f9c38-999f-4ede-b32a-628a72c5ed9b]: (4, ('Tue Sep 30 02:42:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3 (8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b)\n8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b\nTue Sep 30 02:42:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3 (8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b)\n8c243b1a2bf5d7b5a4e3660893c53b614fd40e5e8b252794386e9341147fdc7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.099 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[df306ce1-7557-46db-8a4b-e42a7a52b9be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.100 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e82792-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:38 compute-0 nova_compute[261524]: 2025-09-30 14:42:38.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:38 compute-0 kernel: tap31e82792-20: left promiscuous mode
Sep 30 14:42:38 compute-0 nova_compute[261524]: 2025-09-30 14:42:38.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.148 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d73948-dca8-435c-b7a3-317417e224bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.187 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[7e9f59e2-4430-4a7f-a477-1130041e62f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.189 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[1991f22d-3ca2-452f-9c36-85a659d366ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.220 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[765bef5b-7005-4423-8cc9-e035b65e9012]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668702, 'reachable_time': 21274, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273166, 'error': None, 'target': 'ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d31e82792\x2d2132\x2d423c\x2d8fb3\x2d0fd2453172b3.mount: Deactivated successfully.
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.223 164124 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-31e82792-2132-423c-8fb3-0fd2453172b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.224 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[e4176f19-29a1-47fe-92d3-f5a96af4ddcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.258 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.259 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:38.259 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:38 compute-0 nova_compute[261524]: 2025-09-30 14:42:38.440 2 INFO nova.virt.libvirt.driver [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Deleting instance files /var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87_del
Sep 30 14:42:38 compute-0 nova_compute[261524]: 2025-09-30 14:42:38.441 2 INFO nova.virt.libvirt.driver [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Deletion of /var/lib/nova/instances/c7b89511-067a-4ecf-9b88-41170118da87_del complete
Sep 30 14:42:38 compute-0 nova_compute[261524]: 2025-09-30 14:42:38.494 2 INFO nova.compute.manager [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Took 0.83 seconds to destroy the instance on the hypervisor.
Sep 30 14:42:38 compute-0 nova_compute[261524]: 2025-09-30 14:42:38.495 2 DEBUG oslo.service.loopingcall [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Sep 30 14:42:38 compute-0 nova_compute[261524]: 2025-09-30 14:42:38.498 2 DEBUG nova.compute.manager [-] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Sep 30 14:42:38 compute-0 nova_compute[261524]: 2025-09-30 14:42:38.498 2 DEBUG nova.network.neutron [-] [instance: c7b89511-067a-4ecf-9b88-41170118da87] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Sep 30 14:42:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v814: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 1023 B/s wr, 1 op/s
Sep 30 14:42:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.043 2 DEBUG nova.compute.manager [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-changed-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.044 2 DEBUG nova.compute.manager [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Refreshing instance network info cache due to event network-changed-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.044 2 DEBUG oslo_concurrency.lockutils [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.044 2 DEBUG oslo_concurrency.lockutils [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.044 2 DEBUG nova.network.neutron [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Refreshing network info cache for port fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:42:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:39.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.250 2 INFO nova.network.neutron [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Port fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.250 2 DEBUG nova.network.neutron [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.271 2 DEBUG oslo_concurrency.lockutils [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-c7b89511-067a-4ecf-9b88-41170118da87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.272 2 DEBUG nova.compute.manager [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-unplugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.272 2 DEBUG oslo_concurrency.lockutils [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.273 2 DEBUG oslo_concurrency.lockutils [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.273 2 DEBUG oslo_concurrency.lockutils [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.273 2 DEBUG nova.compute.manager [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] No waiting events found dispatching network-vif-unplugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.274 2 DEBUG nova.compute.manager [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-unplugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.274 2 DEBUG nova.compute.manager [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.275 2 DEBUG oslo_concurrency.lockutils [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c7b89511-067a-4ecf-9b88-41170118da87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.275 2 DEBUG oslo_concurrency.lockutils [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.275 2 DEBUG oslo_concurrency.lockutils [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.276 2 DEBUG nova.compute.manager [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] No waiting events found dispatching network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.276 2 WARNING nova.compute.manager [req-7a05fd04-de77-44b5-a664-660f7a35cec3 req-012c9df7-5b0e-47ff-92e7-62b52aca560c e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received unexpected event network-vif-plugged-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 for instance with vm_state active and task_state deleting.
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.304 2 DEBUG nova.network.neutron [-] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.317 2 INFO nova.compute.manager [-] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Took 0.82 seconds to deallocate network for instance.
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.365 2 DEBUG nova.compute.manager [req-ae1693c2-655a-4b1a-8e2b-687737abb370 req-7c550b73-e462-415f-be08-294ad801c877 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Received event network-vif-deleted-fdd76f4a-6a11-467c-8f19-0b00baa4dbd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.367 2 DEBUG oslo_concurrency.lockutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.368 2 DEBUG oslo_concurrency.lockutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.434 2 DEBUG oslo_concurrency.processutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:42:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:39.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:39 compute-0 ceph-mon[74194]: pgmap v814: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 1023 B/s wr, 1 op/s
Sep 30 14:42:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:42:39 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550341210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.942 2 DEBUG oslo_concurrency.processutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:42:39 compute-0 nova_compute[261524]: 2025-09-30 14:42:39.950 2 DEBUG nova.compute.provider_tree [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:42:40 compute-0 nova_compute[261524]: 2025-09-30 14:42:40.188 2 DEBUG nova.scheduler.client.report [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:42:40 compute-0 nova_compute[261524]: 2025-09-30 14:42:40.211 2 DEBUG oslo_concurrency.lockutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:40 compute-0 nova_compute[261524]: 2025-09-30 14:42:40.246 2 INFO nova.scheduler.client.report [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Deleted allocations for instance c7b89511-067a-4ecf-9b88-41170118da87
Sep 30 14:42:40 compute-0 nova_compute[261524]: 2025-09-30 14:42:40.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:40 compute-0 nova_compute[261524]: 2025-09-30 14:42:40.311 2 DEBUG oslo_concurrency.lockutils [None req-d709b7d2-3891-4e0a-a997-35b3a476ad8a 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c7b89511-067a-4ecf-9b88-41170118da87" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:42:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v815: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 1023 B/s wr, 1 op/s
Sep 30 14:42:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/550341210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:42:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:41.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:41.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:41 compute-0 ceph-mon[74194]: pgmap v815: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 1023 B/s wr, 1 op/s
Sep 30 14:42:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v816: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.8 KiB/s wr, 29 op/s
Sep 30 14:42:42 compute-0 nova_compute[261524]: 2025-09-30 14:42:42.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:43.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:43.640Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:42:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:43.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:43 compute-0 ceph-mon[74194]: pgmap v816: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.8 KiB/s wr, 29 op/s
Sep 30 14:42:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:44 compute-0 sudo[273196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:42:44 compute-0 sudo[273196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:42:44 compute-0 sudo[273196]: pam_unix(sudo:session): session closed for user root
Sep 30 14:42:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:42:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:42:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v817: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.8 KiB/s wr, 29 op/s
Sep 30 14:42:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:44] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:42:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:44] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:42:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:42:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:45.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:45 compute-0 nova_compute[261524]: 2025-09-30 14:42:45.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:45.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:45 compute-0 ceph-mon[74194]: pgmap v817: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.8 KiB/s wr, 29 op/s
Sep 30 14:42:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v818: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.8 KiB/s wr, 29 op/s
Sep 30 14:42:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:47.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:47.132Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:42:47 compute-0 nova_compute[261524]: 2025-09-30 14:42:47.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:47 compute-0 nova_compute[261524]: 2025-09-30 14:42:47.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:47.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:47 compute-0 nova_compute[261524]: 2025-09-30 14:42:47.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:48 compute-0 ceph-mon[74194]: pgmap v818: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.8 KiB/s wr, 29 op/s
Sep 30 14:42:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v819: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Sep 30 14:42:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:49.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:49.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:49 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:49.801 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:42:49 compute-0 nova_compute[261524]: 2025-09-30 14:42:49.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:49 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:49.802 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:42:49 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:42:49.803 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:42:50 compute-0 ceph-mon[74194]: pgmap v819: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Sep 30 14:42:50 compute-0 nova_compute[261524]: 2025-09-30 14:42:50.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v820: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Sep 30 14:42:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:51.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:51.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:52 compute-0 ceph-mon[74194]: pgmap v820: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Sep 30 14:42:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v821: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Sep 30 14:42:52 compute-0 nova_compute[261524]: 2025-09-30 14:42:52.899 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759243357.8982503, c7b89511-067a-4ecf-9b88-41170118da87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:42:52 compute-0 nova_compute[261524]: 2025-09-30 14:42:52.901 2 INFO nova.compute.manager [-] [instance: c7b89511-067a-4ecf-9b88-41170118da87] VM Stopped (Lifecycle Event)
Sep 30 14:42:52 compute-0 nova_compute[261524]: 2025-09-30 14:42:52.919 2 DEBUG nova.compute.manager [None req-f70c995d-b7d6-40cb-92e9-ff9ddfef49e3 - - - - - -] [instance: c7b89511-067a-4ecf-9b88-41170118da87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:42:52 compute-0 nova_compute[261524]: 2025-09-30 14:42:52.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:53.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:53.640Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:42:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:53.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:54 compute-0 ceph-mon[74194]: pgmap v821: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Sep 30 14:42:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v822: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:42:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:54] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:42:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:42:54] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:42:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:55.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:55 compute-0 nova_compute[261524]: 2025-09-30 14:42:55.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:55.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:56 compute-0 ceph-mon[74194]: pgmap v822: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:42:56 compute-0 podman[273236]: 2025-09-30 14:42:56.176039156 +0000 UTC m=+0.087604442 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Sep 30 14:42:56 compute-0 podman[273237]: 2025-09-30 14:42:56.185363458 +0000 UTC m=+0.071366381 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Sep 30 14:42:56 compute-0 podman[273234]: 2025-09-30 14:42:56.198200071 +0000 UTC m=+0.115293450 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20250923, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:42:56 compute-0 podman[273235]: 2025-09-30 14:42:56.229130063 +0000 UTC m=+0.138546423 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:42:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v823: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:42:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:57.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:42:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:42:57.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:42:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:57.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:57 compute-0 nova_compute[261524]: 2025-09-30 14:42:57.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:42:58 compute-0 ceph-mon[74194]: pgmap v823: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:42:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:42:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:42:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:42:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:42:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:42:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:42:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:42:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:42:59.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:42:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:42:59
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['volumes', '.mgr', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'backups', '.nfs', 'default.rgw.log']
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:42:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:42:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:42:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:42:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:42:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:42:59.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:42:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:43:00 compute-0 ceph-mon[74194]: pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:43:00 compute-0 nova_compute[261524]: 2025-09-30 14:43:00.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v825: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:43:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:43:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:43:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:43:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:43:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:43:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:43:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:43:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:43:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:43:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:01.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:01.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:02 compute-0 ceph-mon[74194]: pgmap v825: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v826: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:43:02 compute-0 nova_compute[261524]: 2025-09-30 14:43:02.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:03.642Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:03.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:04 compute-0 ceph-mon[74194]: pgmap v826: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:43:04 compute-0 sudo[273324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:43:04 compute-0 sudo[273324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:04 compute-0 sudo[273324]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v827: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:04] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:43:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:04] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:43:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:05.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:05 compute-0 nova_compute[261524]: 2025-09-30 14:43:05.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:05.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:06 compute-0 ceph-mon[74194]: pgmap v827: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v828: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:43:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:07.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:07.135Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:07.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:07 compute-0 nova_compute[261524]: 2025-09-30 14:43:07.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:08 compute-0 ceph-mon[74194]: pgmap v828: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:43:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v829: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:09.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:09 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/720429256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:43:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:09.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:10 compute-0 ceph-mon[74194]: pgmap v829: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:10 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/608176979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:43:10 compute-0 nova_compute[261524]: 2025-09-30 14:43:10.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v830: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:11.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/472891680' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:43:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/472891680' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:43:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:11.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:12 compute-0 ceph-mon[74194]: pgmap v830: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3713321560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:43:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v831: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:43:12 compute-0 nova_compute[261524]: 2025-09-30 14:43:12.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:13.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:13 compute-0 ceph-mon[74194]: pgmap v831: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.479 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.480 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.480 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.480 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.480 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.480 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.504 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.505 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.505 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.505 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.506 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:43:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:13.643Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:13.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:13 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:43:13 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410199001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:43:13 compute-0 nova_compute[261524]: 2025-09-30 14:43:13.970 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:43:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.139 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.140 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4620MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.141 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.141 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.201 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.202 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.218 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:43:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1317463092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:43:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/410199001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:43:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4237993983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:43:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:43:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:43:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:43:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/299252313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:43:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v832: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.703 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.708 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.731 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:43:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:14] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:43:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:14] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.756 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:43:14 compute-0 nova_compute[261524]: 2025-09-30 14:43:14.757 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:43:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:15.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:15 compute-0 nova_compute[261524]: 2025-09-30 14:43:15.229 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:43:15 compute-0 nova_compute[261524]: 2025-09-30 14:43:15.230 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:43:15 compute-0 nova_compute[261524]: 2025-09-30 14:43:15.230 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:43:15 compute-0 nova_compute[261524]: 2025-09-30 14:43:15.248 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:43:15 compute-0 nova_compute[261524]: 2025-09-30 14:43:15.248 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:43:15 compute-0 nova_compute[261524]: 2025-09-30 14:43:15.249 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:43:15 compute-0 nova_compute[261524]: 2025-09-30 14:43:15.250 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:43:15 compute-0 nova_compute[261524]: 2025-09-30 14:43:15.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:43:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/299252313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:43:15 compute-0 ceph-mon[74194]: pgmap v832: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:43:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:15.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/230648684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:43:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v833: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:43:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:17.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:17.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:43:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:17.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3544607075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:43:17 compute-0 ceph-mon[74194]: pgmap v833: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:43:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:17.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:17 compute-0 nova_compute[261524]: 2025-09-30 14:43:17.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v834: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:43:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:19.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:19.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:19 compute-0 ceph-mon[74194]: pgmap v834: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:43:20 compute-0 nova_compute[261524]: 2025-09-30 14:43:20.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v835: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:43:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:21.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:21.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:21 compute-0 ceph-mon[74194]: pgmap v835: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:43:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v836: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:43:22 compute-0 nova_compute[261524]: 2025-09-30 14:43:22.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:23.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:23 compute-0 sudo[273413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:43:23 compute-0 sudo[273413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:23 compute-0 sudo[273413]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:23 compute-0 sudo[273438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 14:43:23 compute-0 sudo[273438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:23.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:23.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:23 compute-0 ceph-mon[74194]: pgmap v836: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:43:23 compute-0 sudo[273438]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:43:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:43:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:43:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:43:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:24 compute-0 sudo[273486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:43:24 compute-0 sudo[273486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:24 compute-0 sudo[273486]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:24 compute-0 sudo[273511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:43:24 compute-0 sudo[273511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:24 compute-0 sudo[273543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:43:24 compute-0 sudo[273543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:24 compute-0 sudo[273543]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v837: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:43:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:24] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:43:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:24] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:43:24 compute-0 sudo[273511]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:43:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:43:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:43:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:43:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:43:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:43:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:43:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:43:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:43:24 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:43:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:43:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:43:24 compute-0 sudo[273592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:43:24 compute-0 sudo[273592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:24 compute-0 sudo[273592]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:43:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:43:24 compute-0 sudo[273617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:43:24 compute-0 sudo[273617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:25 compute-0 nova_compute[261524]: 2025-09-30 14:43:25.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:25 compute-0 podman[273686]: 2025-09-30 14:43:25.468225038 +0000 UTC m=+0.048158279 container create dbbaefb3a4146fa50d8486efa206b4d3cdf2ebae4721c9289516063ff3a61846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_wescoff, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:43:25 compute-0 systemd[1]: Started libpod-conmon-dbbaefb3a4146fa50d8486efa206b4d3cdf2ebae4721c9289516063ff3a61846.scope.
Sep 30 14:43:25 compute-0 podman[273686]: 2025-09-30 14:43:25.448376204 +0000 UTC m=+0.028309465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:43:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:43:25 compute-0 podman[273686]: 2025-09-30 14:43:25.575302093 +0000 UTC m=+0.155235354 container init dbbaefb3a4146fa50d8486efa206b4d3cdf2ebae4721c9289516063ff3a61846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_wescoff, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:43:25 compute-0 podman[273686]: 2025-09-30 14:43:25.585526718 +0000 UTC m=+0.165459959 container start dbbaefb3a4146fa50d8486efa206b4d3cdf2ebae4721c9289516063ff3a61846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:43:25 compute-0 podman[273686]: 2025-09-30 14:43:25.589287676 +0000 UTC m=+0.169220937 container attach dbbaefb3a4146fa50d8486efa206b4d3cdf2ebae4721c9289516063ff3a61846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:43:25 compute-0 angry_wescoff[273702]: 167 167
Sep 30 14:43:25 compute-0 systemd[1]: libpod-dbbaefb3a4146fa50d8486efa206b4d3cdf2ebae4721c9289516063ff3a61846.scope: Deactivated successfully.
Sep 30 14:43:25 compute-0 podman[273686]: 2025-09-30 14:43:25.594658555 +0000 UTC m=+0.174591796 container died dbbaefb3a4146fa50d8486efa206b4d3cdf2ebae4721c9289516063ff3a61846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:43:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-11304c583cb434d0b6d70a13885277fca7ee58c2d38b10b3ced186db5ded3f9d-merged.mount: Deactivated successfully.
Sep 30 14:43:25 compute-0 podman[273686]: 2025-09-30 14:43:25.636087839 +0000 UTC m=+0.216021080 container remove dbbaefb3a4146fa50d8486efa206b4d3cdf2ebae4721c9289516063ff3a61846 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:43:25 compute-0 systemd[1]: libpod-conmon-dbbaefb3a4146fa50d8486efa206b4d3cdf2ebae4721c9289516063ff3a61846.scope: Deactivated successfully.
Sep 30 14:43:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:43:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:25.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:43:25 compute-0 podman[273726]: 2025-09-30 14:43:25.844567434 +0000 UTC m=+0.054928825 container create 317f1f685ec5505e89de5203ebf15efef48659d537df4995a6302e4f7969a2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_noyce, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:43:25 compute-0 systemd[1]: Started libpod-conmon-317f1f685ec5505e89de5203ebf15efef48659d537df4995a6302e4f7969a2de.scope.
Sep 30 14:43:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:43:25 compute-0 podman[273726]: 2025-09-30 14:43:25.82551978 +0000 UTC m=+0.035881201 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c02b83fe9e5d2090d8aec9a4c43dee02cb81859bdfee44fc437e05585c47b98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c02b83fe9e5d2090d8aec9a4c43dee02cb81859bdfee44fc437e05585c47b98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c02b83fe9e5d2090d8aec9a4c43dee02cb81859bdfee44fc437e05585c47b98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c02b83fe9e5d2090d8aec9a4c43dee02cb81859bdfee44fc437e05585c47b98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c02b83fe9e5d2090d8aec9a4c43dee02cb81859bdfee44fc437e05585c47b98/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:25 compute-0 podman[273726]: 2025-09-30 14:43:25.932785361 +0000 UTC m=+0.143146772 container init 317f1f685ec5505e89de5203ebf15efef48659d537df4995a6302e4f7969a2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_noyce, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:43:25 compute-0 podman[273726]: 2025-09-30 14:43:25.943088948 +0000 UTC m=+0.153450359 container start 317f1f685ec5505e89de5203ebf15efef48659d537df4995a6302e4f7969a2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:43:25 compute-0 podman[273726]: 2025-09-30 14:43:25.946216389 +0000 UTC m=+0.156577780 container attach 317f1f685ec5505e89de5203ebf15efef48659d537df4995a6302e4f7969a2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:43:25 compute-0 ceph-mon[74194]: pgmap v837: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:43:26 compute-0 elastic_noyce[273743]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:43:26 compute-0 elastic_noyce[273743]: --> All data devices are unavailable
Sep 30 14:43:26 compute-0 ovn_controller[154021]: 2025-09-30T14:43:26Z|00057|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Sep 30 14:43:26 compute-0 systemd[1]: libpod-317f1f685ec5505e89de5203ebf15efef48659d537df4995a6302e4f7969a2de.scope: Deactivated successfully.
Sep 30 14:43:26 compute-0 podman[273726]: 2025-09-30 14:43:26.370238602 +0000 UTC m=+0.580600033 container died 317f1f685ec5505e89de5203ebf15efef48659d537df4995a6302e4f7969a2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Sep 30 14:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c02b83fe9e5d2090d8aec9a4c43dee02cb81859bdfee44fc437e05585c47b98-merged.mount: Deactivated successfully.
Sep 30 14:43:26 compute-0 podman[273726]: 2025-09-30 14:43:26.447110495 +0000 UTC m=+0.657471886 container remove 317f1f685ec5505e89de5203ebf15efef48659d537df4995a6302e4f7969a2de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_noyce, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 14:43:26 compute-0 systemd[1]: libpod-conmon-317f1f685ec5505e89de5203ebf15efef48659d537df4995a6302e4f7969a2de.scope: Deactivated successfully.
Sep 30 14:43:26 compute-0 podman[273760]: 2025-09-30 14:43:26.498799705 +0000 UTC m=+0.092854588 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:43:26 compute-0 podman[273769]: 2025-09-30 14:43:26.503083727 +0000 UTC m=+0.081117024 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Sep 30 14:43:26 compute-0 podman[273768]: 2025-09-30 14:43:26.504464222 +0000 UTC m=+0.091345759 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, managed_by=edpm_ansible)
Sep 30 14:43:26 compute-0 sudo[273617]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:26 compute-0 podman[273762]: 2025-09-30 14:43:26.536505203 +0000 UTC m=+0.122681262 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Sep 30 14:43:26 compute-0 sudo[273849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:43:26 compute-0 sudo[273849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:26 compute-0 sudo[273849]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:26 compute-0 sudo[273874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:43:26 compute-0 sudo[273874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v838: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:43:27 compute-0 podman[273942]: 2025-09-30 14:43:27.089586942 +0000 UTC m=+0.048976501 container create 87ce2be3ae942cfcbe6c7cc74b070f8281db3164511bf767c7d2d00036c99bb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_montalcini, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 14:43:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:27 compute-0 systemd[1]: Started libpod-conmon-87ce2be3ae942cfcbe6c7cc74b070f8281db3164511bf767c7d2d00036c99bb3.scope.
Sep 30 14:43:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:27.137Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:27 compute-0 podman[273942]: 2025-09-30 14:43:27.064100291 +0000 UTC m=+0.023489870 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:43:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:43:27 compute-0 podman[273942]: 2025-09-30 14:43:27.181572807 +0000 UTC m=+0.140962436 container init 87ce2be3ae942cfcbe6c7cc74b070f8281db3164511bf767c7d2d00036c99bb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_montalcini, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:43:27 compute-0 podman[273942]: 2025-09-30 14:43:27.189473602 +0000 UTC m=+0.148863151 container start 87ce2be3ae942cfcbe6c7cc74b070f8281db3164511bf767c7d2d00036c99bb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:43:27 compute-0 podman[273942]: 2025-09-30 14:43:27.194292287 +0000 UTC m=+0.153681926 container attach 87ce2be3ae942cfcbe6c7cc74b070f8281db3164511bf767c7d2d00036c99bb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_montalcini, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:43:27 compute-0 affectionate_montalcini[273959]: 167 167
Sep 30 14:43:27 compute-0 systemd[1]: libpod-87ce2be3ae942cfcbe6c7cc74b070f8281db3164511bf767c7d2d00036c99bb3.scope: Deactivated successfully.
Sep 30 14:43:27 compute-0 podman[273942]: 2025-09-30 14:43:27.197422978 +0000 UTC m=+0.156812527 container died 87ce2be3ae942cfcbe6c7cc74b070f8281db3164511bf767c7d2d00036c99bb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_montalcini, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-46bc0cfda672a7de246ea56746fd65530e9b875668aa6a81ef03c56f6fbb9440-merged.mount: Deactivated successfully.
Sep 30 14:43:27 compute-0 podman[273942]: 2025-09-30 14:43:27.235630388 +0000 UTC m=+0.195019937 container remove 87ce2be3ae942cfcbe6c7cc74b070f8281db3164511bf767c7d2d00036c99bb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:43:27 compute-0 systemd[1]: libpod-conmon-87ce2be3ae942cfcbe6c7cc74b070f8281db3164511bf767c7d2d00036c99bb3.scope: Deactivated successfully.
Sep 30 14:43:27 compute-0 podman[273982]: 2025-09-30 14:43:27.435607683 +0000 UTC m=+0.047584835 container create 229233c1b907b85561bd2ff428f33a53dc586b895525454ff4ae11f720c4c864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jackson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:43:27 compute-0 systemd[1]: Started libpod-conmon-229233c1b907b85561bd2ff428f33a53dc586b895525454ff4ae11f720c4c864.scope.
Sep 30 14:43:27 compute-0 podman[273982]: 2025-09-30 14:43:27.415165863 +0000 UTC m=+0.027143045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:43:27 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4d5129d6ee5bca064fc225bb40dd31aca439e261763c0830fbcf493088ace4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4d5129d6ee5bca064fc225bb40dd31aca439e261763c0830fbcf493088ace4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4d5129d6ee5bca064fc225bb40dd31aca439e261763c0830fbcf493088ace4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4d5129d6ee5bca064fc225bb40dd31aca439e261763c0830fbcf493088ace4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:27 compute-0 podman[273982]: 2025-09-30 14:43:27.546699573 +0000 UTC m=+0.158676795 container init 229233c1b907b85561bd2ff428f33a53dc586b895525454ff4ae11f720c4c864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jackson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:43:27 compute-0 podman[273982]: 2025-09-30 14:43:27.55547044 +0000 UTC m=+0.167447592 container start 229233c1b907b85561bd2ff428f33a53dc586b895525454ff4ae11f720c4c864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:43:27 compute-0 podman[273982]: 2025-09-30 14:43:27.558269173 +0000 UTC m=+0.170246405 container attach 229233c1b907b85561bd2ff428f33a53dc586b895525454ff4ae11f720c4c864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jackson, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:43:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:27 compute-0 sad_jackson[274000]: {
Sep 30 14:43:27 compute-0 sad_jackson[274000]:     "0": [
Sep 30 14:43:27 compute-0 sad_jackson[274000]:         {
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "devices": [
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "/dev/loop3"
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             ],
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "lv_name": "ceph_lv0",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "lv_size": "21470642176",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "name": "ceph_lv0",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "tags": {
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.cluster_name": "ceph",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.crush_device_class": "",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.encrypted": "0",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.osd_id": "0",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.type": "block",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.vdo": "0",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:                 "ceph.with_tpm": "0"
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             },
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "type": "block",
Sep 30 14:43:27 compute-0 sad_jackson[274000]:             "vg_name": "ceph_vg0"
Sep 30 14:43:27 compute-0 sad_jackson[274000]:         }
Sep 30 14:43:27 compute-0 sad_jackson[274000]:     ]
Sep 30 14:43:27 compute-0 sad_jackson[274000]: }
Sep 30 14:43:27 compute-0 systemd[1]: libpod-229233c1b907b85561bd2ff428f33a53dc586b895525454ff4ae11f720c4c864.scope: Deactivated successfully.
Sep 30 14:43:27 compute-0 podman[273982]: 2025-09-30 14:43:27.87327783 +0000 UTC m=+0.485255002 container died 229233c1b907b85561bd2ff428f33a53dc586b895525454ff4ae11f720c4c864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jackson, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 14:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c4d5129d6ee5bca064fc225bb40dd31aca439e261763c0830fbcf493088ace4-merged.mount: Deactivated successfully.
Sep 30 14:43:27 compute-0 podman[273982]: 2025-09-30 14:43:27.9103075 +0000 UTC m=+0.522284642 container remove 229233c1b907b85561bd2ff428f33a53dc586b895525454ff4ae11f720c4c864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_jackson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:43:27 compute-0 systemd[1]: libpod-conmon-229233c1b907b85561bd2ff428f33a53dc586b895525454ff4ae11f720c4c864.scope: Deactivated successfully.
Sep 30 14:43:27 compute-0 nova_compute[261524]: 2025-09-30 14:43:27.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:27 compute-0 sudo[273874]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:28 compute-0 ceph-mon[74194]: pgmap v838: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:43:28 compute-0 sudo[274022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:43:28 compute-0 sudo[274022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:28 compute-0 sudo[274022]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:28 compute-0 sudo[274048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:43:28 compute-0 sudo[274048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:28 compute-0 podman[274114]: 2025-09-30 14:43:28.633381576 +0000 UTC m=+0.049136235 container create 598169427a304c8605dc0eaf1fa4eb148bfe55ca6aff8d650ef2f85afeabff0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:43:28 compute-0 systemd[1]: Started libpod-conmon-598169427a304c8605dc0eaf1fa4eb148bfe55ca6aff8d650ef2f85afeabff0b.scope.
Sep 30 14:43:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v839: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:43:28 compute-0 podman[274114]: 2025-09-30 14:43:28.611655463 +0000 UTC m=+0.027410102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:43:28 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:43:28 compute-0 podman[274114]: 2025-09-30 14:43:28.742613038 +0000 UTC m=+0.158367737 container init 598169427a304c8605dc0eaf1fa4eb148bfe55ca6aff8d650ef2f85afeabff0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_haslett, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:43:28 compute-0 podman[274114]: 2025-09-30 14:43:28.754194678 +0000 UTC m=+0.169949327 container start 598169427a304c8605dc0eaf1fa4eb148bfe55ca6aff8d650ef2f85afeabff0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:43:28 compute-0 podman[274114]: 2025-09-30 14:43:28.757512424 +0000 UTC m=+0.173267143 container attach 598169427a304c8605dc0eaf1fa4eb148bfe55ca6aff8d650ef2f85afeabff0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_haslett, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Sep 30 14:43:28 compute-0 distracted_haslett[274131]: 167 167
Sep 30 14:43:28 compute-0 systemd[1]: libpod-598169427a304c8605dc0eaf1fa4eb148bfe55ca6aff8d650ef2f85afeabff0b.scope: Deactivated successfully.
Sep 30 14:43:28 compute-0 conmon[274131]: conmon 598169427a304c8605dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-598169427a304c8605dc0eaf1fa4eb148bfe55ca6aff8d650ef2f85afeabff0b.scope/container/memory.events
Sep 30 14:43:28 compute-0 podman[274114]: 2025-09-30 14:43:28.763236722 +0000 UTC m=+0.178991441 container died 598169427a304c8605dc0eaf1fa4eb148bfe55ca6aff8d650ef2f85afeabff0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_haslett, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:43:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a32c2fb84ebe8e0a21d3462faef3060bc0cc98e8d2f007e25af2f4c7c5de4af-merged.mount: Deactivated successfully.
Sep 30 14:43:28 compute-0 podman[274114]: 2025-09-30 14:43:28.806952446 +0000 UTC m=+0.222707075 container remove 598169427a304c8605dc0eaf1fa4eb148bfe55ca6aff8d650ef2f85afeabff0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_haslett, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:43:28 compute-0 systemd[1]: libpod-conmon-598169427a304c8605dc0eaf1fa4eb148bfe55ca6aff8d650ef2f85afeabff0b.scope: Deactivated successfully.
Sep 30 14:43:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:29 compute-0 podman[274155]: 2025-09-30 14:43:29.007331891 +0000 UTC m=+0.057370929 container create 80ed3891cda688562d7c8e34b3c8aae98608cb4a0f5b8d1cc2957c094e05da7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:43:29 compute-0 systemd[1]: Started libpod-conmon-80ed3891cda688562d7c8e34b3c8aae98608cb4a0f5b8d1cc2957c094e05da7f.scope.
Sep 30 14:43:29 compute-0 podman[274155]: 2025-09-30 14:43:28.981624044 +0000 UTC m=+0.031663112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:43:29 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e187ddeca0368eebf69e5de54f76e03c2314154d29676c3909af45d3a38e914/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e187ddeca0368eebf69e5de54f76e03c2314154d29676c3909af45d3a38e914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e187ddeca0368eebf69e5de54f76e03c2314154d29676c3909af45d3a38e914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e187ddeca0368eebf69e5de54f76e03c2314154d29676c3909af45d3a38e914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:43:29 compute-0 podman[274155]: 2025-09-30 14:43:29.104320775 +0000 UTC m=+0.154359823 container init 80ed3891cda688562d7c8e34b3c8aae98608cb4a0f5b8d1cc2957c094e05da7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 14:43:29 compute-0 podman[274155]: 2025-09-30 14:43:29.119917849 +0000 UTC m=+0.169956887 container start 80ed3891cda688562d7c8e34b3c8aae98608cb4a0f5b8d1cc2957c094e05da7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cannon, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:43:29 compute-0 podman[274155]: 2025-09-30 14:43:29.123742048 +0000 UTC m=+0.173781136 container attach 80ed3891cda688562d7c8e34b3c8aae98608cb4a0f5b8d1cc2957c094e05da7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:43:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:43:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:43:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:43:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:43:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:29.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:43:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:43:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:43:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:43:29 compute-0 lvm[274249]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:43:29 compute-0 lvm[274249]: VG ceph_vg0 finished
Sep 30 14:43:29 compute-0 zealous_cannon[274172]: {}
Sep 30 14:43:29 compute-0 podman[274155]: 2025-09-30 14:43:29.912572949 +0000 UTC m=+0.962611997 container died 80ed3891cda688562d7c8e34b3c8aae98608cb4a0f5b8d1cc2957c094e05da7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:43:29 compute-0 systemd[1]: libpod-80ed3891cda688562d7c8e34b3c8aae98608cb4a0f5b8d1cc2957c094e05da7f.scope: Deactivated successfully.
Sep 30 14:43:29 compute-0 systemd[1]: libpod-80ed3891cda688562d7c8e34b3c8aae98608cb4a0f5b8d1cc2957c094e05da7f.scope: Consumed 1.302s CPU time.
Sep 30 14:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e187ddeca0368eebf69e5de54f76e03c2314154d29676c3909af45d3a38e914-merged.mount: Deactivated successfully.
Sep 30 14:43:29 compute-0 podman[274155]: 2025-09-30 14:43:29.960408379 +0000 UTC m=+1.010447427 container remove 80ed3891cda688562d7c8e34b3c8aae98608cb4a0f5b8d1cc2957c094e05da7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Sep 30 14:43:29 compute-0 systemd[1]: libpod-conmon-80ed3891cda688562d7c8e34b3c8aae98608cb4a0f5b8d1cc2957c094e05da7f.scope: Deactivated successfully.
Sep 30 14:43:30 compute-0 sudo[274048]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:30 compute-0 ceph-mon[74194]: pgmap v839: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:43:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:43:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:43:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:43:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:30 compute-0 sudo[274266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:43:30 compute-0 sudo[274266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:30 compute-0 sudo[274266]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:30 compute-0 nova_compute[261524]: 2025-09-30 14:43:30.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v840: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:43:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:43:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:31.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:31.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:32 compute-0 ceph-mon[74194]: pgmap v840: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:43:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v841: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Sep 30 14:43:32 compute-0 nova_compute[261524]: 2025-09-30 14:43:32.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:33.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:33.645Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:34 compute-0 ceph-mon[74194]: pgmap v841: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Sep 30 14:43:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v842: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:43:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:34] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Sep 30 14:43:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:34] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Sep 30 14:43:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:35 compute-0 nova_compute[261524]: 2025-09-30 14:43:35.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:35.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:36 compute-0 ceph-mon[74194]: pgmap v842: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:43:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v843: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:43:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:37.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:37.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:37.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:37 compute-0 nova_compute[261524]: 2025-09-30 14:43:37.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:38 compute-0 ceph-mon[74194]: pgmap v843: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:43:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:43:38.259 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:43:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:43:38.261 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:43:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:43:38.261 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:43:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v844: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:43:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:39.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:39.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:40 compute-0 ceph-mon[74194]: pgmap v844: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:43:40 compute-0 nova_compute[261524]: 2025-09-30 14:43:40.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v845: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:43:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:41.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:41 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:43:41.210 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:43:41 compute-0 nova_compute[261524]: 2025-09-30 14:43:41.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:41 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:43:41.213 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:43:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:42 compute-0 ceph-mon[74194]: pgmap v845: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:43:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v846: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:43:42 compute-0 nova_compute[261524]: 2025-09-30 14:43:42.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:43:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:43.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:43:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:43.646Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:43:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:43.647Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:43:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:43.647Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:44 compute-0 ceph-mon[74194]: pgmap v846: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:43:44 compute-0 sudo[274305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:43:44 compute-0 sudo[274305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:43:44 compute-0 sudo[274305]: pam_unix(sudo:session): session closed for user root
Sep 30 14:43:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:43:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:43:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v847: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Sep 30 14:43:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:44] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:43:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:44] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:43:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:45.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:43:45 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/183003241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:43:45 compute-0 nova_compute[261524]: 2025-09-30 14:43:45.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:45.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:46 compute-0 ceph-mon[74194]: pgmap v847: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Sep 30 14:43:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v848: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 16 KiB/s wr, 8 op/s
Sep 30 14:43:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:47.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:47.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:47 compute-0 nova_compute[261524]: 2025-09-30 14:43:47.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:48 compute-0 ceph-mon[74194]: pgmap v848: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 16 KiB/s wr, 8 op/s
Sep 30 14:43:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v849: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 KiB/s wr, 7 op/s
Sep 30 14:43:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:49.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:49.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:50 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:43:50.215 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:43:50 compute-0 ceph-mon[74194]: pgmap v849: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 KiB/s wr, 7 op/s
Sep 30 14:43:50 compute-0 nova_compute[261524]: 2025-09-30 14:43:50.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v850: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 KiB/s wr, 7 op/s
Sep 30 14:43:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:51.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:51 compute-0 ceph-mon[74194]: pgmap v850: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 KiB/s wr, 7 op/s
Sep 30 14:43:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:51.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:52 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1673231352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:43:52 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1098803955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:43:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v851: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Sep 30 14:43:52 compute-0 nova_compute[261524]: 2025-09-30 14:43:52.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:43:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3104 syncs, 3.57 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1677 writes, 5010 keys, 1677 commit groups, 1.0 writes per commit group, ingest: 5.64 MB, 0.01 MB/s
                                           Interval WAL: 1677 writes, 710 syncs, 2.36 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 14:43:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:53.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:53 compute-0 ceph-mon[74194]: pgmap v851: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Sep 30 14:43:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:53.648Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:53.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v852: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Sep 30 14:43:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:54] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:43:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:43:54] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:43:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:55.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:55 compute-0 nova_compute[261524]: 2025-09-30 14:43:55.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:55 compute-0 ceph-mon[74194]: pgmap v852: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Sep 30 14:43:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:55.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v853: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 48 op/s
Sep 30 14:43:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:43:57.140Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:43:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:43:57 compute-0 podman[274345]: 2025-09-30 14:43:57.142476775 +0000 UTC m=+0.059282945 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Sep 30 14:43:57 compute-0 podman[274344]: 2025-09-30 14:43:57.142713231 +0000 UTC m=+0.063940806 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 14:43:57 compute-0 podman[274342]: 2025-09-30 14:43:57.172055025 +0000 UTC m=+0.097460279 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 14:43:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:57 compute-0 podman[274343]: 2025-09-30 14:43:57.199592352 +0000 UTC m=+0.124709958 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 14:43:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:57.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:57 compute-0 ceph-mon[74194]: pgmap v853: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 48 op/s
Sep 30 14:43:57 compute-0 nova_compute[261524]: 2025-09-30 14:43:57.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:43:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v854: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 484 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Sep 30 14:43:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:43:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:43:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:43:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:43:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:43:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:43:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:43:59.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:43:59
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.nfs', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:43:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:43:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:43:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:43:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:43:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:43:59.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:43:59 compute-0 ceph-mon[74194]: pgmap v854: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 484 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Sep 30 14:43:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011082588558963338 of space, bias 1.0, pg target 0.3324776567689002 quantized to 32 (current 32)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:43:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:44:00 compute-0 nova_compute[261524]: 2025-09-30 14:44:00.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v855: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 484 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Sep 30 14:44:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:44:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:44:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:44:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:44:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:44:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:44:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:44:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:44:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:44:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:44:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:01.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:01.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:01 compute-0 ceph-mon[74194]: pgmap v855: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 484 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Sep 30 14:44:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v856: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Sep 30 14:44:02 compute-0 nova_compute[261524]: 2025-09-30 14:44:02.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:03.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:03.650Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:44:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:03.650Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:44:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:03.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:03 compute-0 ceph-mon[74194]: pgmap v856: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Sep 30 14:44:03 compute-0 nova_compute[261524]: 2025-09-30 14:44:03.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:04 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 14:44:04 compute-0 sudo[274426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:44:04 compute-0 sudo[274426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:04 compute-0 sudo[274426]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v857: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Sep 30 14:44:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:04] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:44:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:04] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:44:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:05.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:05 compute-0 nova_compute[261524]: 2025-09-30 14:44:05.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:05.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:05 compute-0 ceph-mon[74194]: pgmap v857: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Sep 30 14:44:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v858: 337 pgs: 337 active+clean; 175 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 618 KiB/s wr, 86 op/s
Sep 30 14:44:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:07.142Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:44:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:07.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:07.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:07 compute-0 ceph-mon[74194]: pgmap v858: 337 pgs: 337 active+clean; 175 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 618 KiB/s wr, 86 op/s
Sep 30 14:44:07 compute-0 nova_compute[261524]: 2025-09-30 14:44:07.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v859: 337 pgs: 337 active+clean; 175 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 73 op/s
Sep 30 14:44:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:09.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:09.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:09 compute-0 ceph-mon[74194]: pgmap v859: 337 pgs: 337 active+clean; 175 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 73 op/s
Sep 30 14:44:10 compute-0 nova_compute[261524]: 2025-09-30 14:44:10.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v860: 337 pgs: 337 active+clean; 175 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 73 op/s
Sep 30 14:44:10 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3028998939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:10 compute-0 nova_compute[261524]: 2025-09-30 14:44:10.975 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:11.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:11.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:11 compute-0 nova_compute[261524]: 2025-09-30 14:44:11.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:11 compute-0 nova_compute[261524]: 2025-09-30 14:44:11.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Sep 30 14:44:11 compute-0 nova_compute[261524]: 2025-09-30 14:44:11.968 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Sep 30 14:44:12 compute-0 ceph-mon[74194]: pgmap v860: 337 pgs: 337 active+clean; 175 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 73 op/s
Sep 30 14:44:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/366802027' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:44:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/366802027' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:44:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1212664708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v861: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Sep 30 14:44:12 compute-0 nova_compute[261524]: 2025-09-30 14:44:12.964 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:12 compute-0 nova_compute[261524]: 2025-09-30 14:44:12.964 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:12 compute-0 nova_compute[261524]: 2025-09-30 14:44:12.964 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:12 compute-0 nova_compute[261524]: 2025-09-30 14:44:12.965 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:12 compute-0 nova_compute[261524]: 2025-09-30 14:44:12.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:13.651Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:44:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:13.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:13 compute-0 nova_compute[261524]: 2025-09-30 14:44:13.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:13 compute-0 nova_compute[261524]: 2025-09-30 14:44:13.954 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:44:13 compute-0 nova_compute[261524]: 2025-09-30 14:44:13.954 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:44:13 compute-0 nova_compute[261524]: 2025-09-30 14:44:13.968 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:44:13 compute-0 nova_compute[261524]: 2025-09-30 14:44:13.968 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:14 compute-0 ceph-mon[74194]: pgmap v861: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Sep 30 14:44:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/87781901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:44:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:44:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v862: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:44:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:14] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:44:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:14] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:44:14 compute-0 nova_compute[261524]: 2025-09-30 14:44:14.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:14 compute-0 nova_compute[261524]: 2025-09-30 14:44:14.978 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:14 compute-0 nova_compute[261524]: 2025-09-30 14:44:14.978 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:14 compute-0 nova_compute[261524]: 2025-09-30 14:44:14.979 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:14 compute-0 nova_compute[261524]: 2025-09-30 14:44:14.979 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:44:14 compute-0 nova_compute[261524]: 2025-09-30 14:44:14.980 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:44:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1596907925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:44:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:15.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:15 compute-0 nova_compute[261524]: 2025-09-30 14:44:15.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:44:15 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897141907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:15 compute-0 nova_compute[261524]: 2025-09-30 14:44:15.447 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:44:15 compute-0 nova_compute[261524]: 2025-09-30 14:44:15.630 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:44:15 compute-0 nova_compute[261524]: 2025-09-30 14:44:15.631 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4628MB free_disk=59.89730453491211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:44:15 compute-0 nova_compute[261524]: 2025-09-30 14:44:15.632 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:15 compute-0 nova_compute[261524]: 2025-09-30 14:44:15.632 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:15 compute-0 nova_compute[261524]: 2025-09-30 14:44:15.775 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:44:15 compute-0 nova_compute[261524]: 2025-09-30 14:44:15.776 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:44:15 compute-0 nova_compute[261524]: 2025-09-30 14:44:15.822 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:44:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:15.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:16 compute-0 ceph-mon[74194]: pgmap v862: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:44:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/897141907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:44:16 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2510359750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:16 compute-0 nova_compute[261524]: 2025-09-30 14:44:16.327 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:44:16 compute-0 nova_compute[261524]: 2025-09-30 14:44:16.331 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:44:16 compute-0 nova_compute[261524]: 2025-09-30 14:44:16.360 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:44:16 compute-0 nova_compute[261524]: 2025-09-30 14:44:16.362 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:44:16 compute-0 nova_compute[261524]: 2025-09-30 14:44:16.362 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:16 compute-0 nova_compute[261524]: 2025-09-30 14:44:16.363 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:16 compute-0 nova_compute[261524]: 2025-09-30 14:44:16.363 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Sep 30 14:44:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v863: 337 pgs: 337 active+clean; 174 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Sep 30 14:44:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:17.143Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:44:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:17.143Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:44:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:17.144Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:44:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2510359750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:17.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:17 compute-0 nova_compute[261524]: 2025-09-30 14:44:17.390 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:17 compute-0 nova_compute[261524]: 2025-09-30 14:44:17.391 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:17 compute-0 nova_compute[261524]: 2025-09-30 14:44:17.391 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:44:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:44:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:17.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:44:17 compute-0 nova_compute[261524]: 2025-09-30 14:44:17.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:18 compute-0 ceph-mon[74194]: pgmap v863: 337 pgs: 337 active+clean; 174 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Sep 30 14:44:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v864: 337 pgs: 337 active+clean; 174 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 1.6 MiB/s wr, 66 op/s
Sep 30 14:44:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/301683750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:19.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:19.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:20 compute-0 ceph-mon[74194]: pgmap v864: 337 pgs: 337 active+clean; 174 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 1.6 MiB/s wr, 66 op/s
Sep 30 14:44:20 compute-0 nova_compute[261524]: 2025-09-30 14:44:20.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v865: 337 pgs: 337 active+clean; 174 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 1.6 MiB/s wr, 66 op/s
Sep 30 14:44:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:21.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:21.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:22 compute-0 ceph-mon[74194]: pgmap v865: 337 pgs: 337 active+clean; 174 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 1.6 MiB/s wr, 66 op/s
Sep 30 14:44:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v866: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.6 MiB/s wr, 82 op/s
Sep 30 14:44:22 compute-0 nova_compute[261524]: 2025-09-30 14:44:22.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:23.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:23.652Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:44:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:23.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:24 compute-0 ceph-mon[74194]: pgmap v866: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.6 MiB/s wr, 82 op/s
Sep 30 14:44:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4015843017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:24 compute-0 sudo[274517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:44:24 compute-0 sudo[274517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:24 compute-0 sudo[274517]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:24] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:44:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:24] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:44:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v867: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 30 op/s
Sep 30 14:44:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:25 compute-0 nova_compute[261524]: 2025-09-30 14:44:25.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:25.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:26 compute-0 ceph-mon[74194]: pgmap v867: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 30 op/s
Sep 30 14:44:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 23 KiB/s wr, 57 op/s
Sep 30 14:44:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:27.145Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:44:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:27.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:27.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:27 compute-0 nova_compute[261524]: 2025-09-30 14:44:27.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:28 compute-0 podman[274546]: 2025-09-30 14:44:28.134363849 +0000 UTC m=+0.061418170 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 14:44:28 compute-0 podman[274548]: 2025-09-30 14:44:28.150286693 +0000 UTC m=+0.070936648 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:44:28 compute-0 podman[274549]: 2025-09-30 14:44:28.1513298 +0000 UTC m=+0.067660522 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Sep 30 14:44:28 compute-0 podman[274547]: 2025-09-30 14:44:28.173376994 +0000 UTC m=+0.097565421 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 14:44:28 compute-0 ceph-mon[74194]: pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 23 KiB/s wr, 57 op/s
Sep 30 14:44:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 11 KiB/s wr, 44 op/s
Sep 30 14:44:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:29.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:44:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:44:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:44:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:44:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:44:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:44:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:44:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:44:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:29.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:30 compute-0 ceph-mon[74194]: pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 11 KiB/s wr, 44 op/s
Sep 30 14:44:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:44:30 compute-0 nova_compute[261524]: 2025-09-30 14:44:30.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:30 compute-0 sudo[274631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:44:30 compute-0 sudo[274631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:30 compute-0 sudo[274631]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:30 compute-0 sudo[274656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:44:30 compute-0 sudo[274656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 11 KiB/s wr, 44 op/s
Sep 30 14:44:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:44:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:44:30 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:31 compute-0 sudo[274656]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:31 compute-0 sudo[274712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:44:31 compute-0 sudo[274712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:31 compute-0 sudo[274712]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:31.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:31 compute-0 sudo[274737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- inventory --format=json-pretty --filter-for-batch
Sep 30 14:44:31 compute-0 sudo[274737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:31 compute-0 podman[274804]: 2025-09-30 14:44:31.689855169 +0000 UTC m=+0.046399749 container create 761cba5f207ff12fec08f6988d8ef6dfa19f85a3ab005ebcc30b6a14a325f583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 14:44:31 compute-0 systemd[1]: Started libpod-conmon-761cba5f207ff12fec08f6988d8ef6dfa19f85a3ab005ebcc30b6a14a325f583.scope.
Sep 30 14:44:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:44:31 compute-0 podman[274804]: 2025-09-30 14:44:31.67067855 +0000 UTC m=+0.027223150 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:44:31 compute-0 podman[274804]: 2025-09-30 14:44:31.786645239 +0000 UTC m=+0.143189859 container init 761cba5f207ff12fec08f6988d8ef6dfa19f85a3ab005ebcc30b6a14a325f583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:44:31 compute-0 podman[274804]: 2025-09-30 14:44:31.797543453 +0000 UTC m=+0.154088033 container start 761cba5f207ff12fec08f6988d8ef6dfa19f85a3ab005ebcc30b6a14a325f583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermat, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:44:31 compute-0 podman[274804]: 2025-09-30 14:44:31.801728842 +0000 UTC m=+0.158273462 container attach 761cba5f207ff12fec08f6988d8ef6dfa19f85a3ab005ebcc30b6a14a325f583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:44:31 compute-0 systemd[1]: libpod-761cba5f207ff12fec08f6988d8ef6dfa19f85a3ab005ebcc30b6a14a325f583.scope: Deactivated successfully.
Sep 30 14:44:31 compute-0 wizardly_fermat[274821]: 167 167
Sep 30 14:44:31 compute-0 podman[274804]: 2025-09-30 14:44:31.80781229 +0000 UTC m=+0.164356870 container died 761cba5f207ff12fec08f6988d8ef6dfa19f85a3ab005ebcc30b6a14a325f583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermat, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:44:31 compute-0 conmon[274821]: conmon 761cba5f207ff12fec08 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-761cba5f207ff12fec08f6988d8ef6dfa19f85a3ab005ebcc30b6a14a325f583.scope/container/memory.events
Sep 30 14:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9351f62f436ad29f668050692d7414cf404a91e0b307e99de2d5c2e9bf191b4-merged.mount: Deactivated successfully.
Sep 30 14:44:31 compute-0 ceph-mon[74194]: pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 11 KiB/s wr, 44 op/s
Sep 30 14:44:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:31 compute-0 podman[274804]: 2025-09-30 14:44:31.857243867 +0000 UTC m=+0.213788467 container remove 761cba5f207ff12fec08f6988d8ef6dfa19f85a3ab005ebcc30b6a14a325f583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:44:31 compute-0 systemd[1]: libpod-conmon-761cba5f207ff12fec08f6988d8ef6dfa19f85a3ab005ebcc30b6a14a325f583.scope: Deactivated successfully.
Sep 30 14:44:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:31.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:32 compute-0 podman[274846]: 2025-09-30 14:44:32.068565869 +0000 UTC m=+0.065902247 container create 1e48506de58d1ba21e6bcdf8c79d2f03233260aecc0b8399968b48d43350d24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:44:32 compute-0 systemd[1]: Started libpod-conmon-1e48506de58d1ba21e6bcdf8c79d2f03233260aecc0b8399968b48d43350d24c.scope.
Sep 30 14:44:32 compute-0 podman[274846]: 2025-09-30 14:44:32.040744905 +0000 UTC m=+0.038081323 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:44:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:44:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f7ff6d19ecb3f3347fb57cc9271887dda6bd5b361424a8f9949feca6ab7c10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f7ff6d19ecb3f3347fb57cc9271887dda6bd5b361424a8f9949feca6ab7c10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f7ff6d19ecb3f3347fb57cc9271887dda6bd5b361424a8f9949feca6ab7c10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f7ff6d19ecb3f3347fb57cc9271887dda6bd5b361424a8f9949feca6ab7c10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:32 compute-0 podman[274846]: 2025-09-30 14:44:32.1726718 +0000 UTC m=+0.170008198 container init 1e48506de58d1ba21e6bcdf8c79d2f03233260aecc0b8399968b48d43350d24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:44:32 compute-0 podman[274846]: 2025-09-30 14:44:32.181037988 +0000 UTC m=+0.178374366 container start 1e48506de58d1ba21e6bcdf8c79d2f03233260aecc0b8399968b48d43350d24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:44:32 compute-0 podman[274846]: 2025-09-30 14:44:32.184346314 +0000 UTC m=+0.181682722 container attach 1e48506de58d1ba21e6bcdf8c79d2f03233260aecc0b8399968b48d43350d24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Sep 30 14:44:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 11 KiB/s wr, 44 op/s
Sep 30 14:44:32 compute-0 agitated_tharp[274863]: [
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:     {
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         "available": false,
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         "being_replaced": false,
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         "ceph_device_lvm": false,
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         "device_id": "QEMU_DVD-ROM_QM00001",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         "lsm_data": {},
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         "lvs": [],
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         "path": "/dev/sr0",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         "rejected_reasons": [
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "Insufficient space (<5GB)",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "Has a FileSystem"
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         ],
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         "sys_api": {
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "actuators": null,
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "device_nodes": [
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:                 "sr0"
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             ],
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "devname": "sr0",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "human_readable_size": "482.00 KB",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "id_bus": "ata",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "model": "QEMU DVD-ROM",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "nr_requests": "2",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "parent": "/dev/sr0",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "partitions": {},
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "path": "/dev/sr0",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "removable": "1",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "rev": "2.5+",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "ro": "0",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "rotational": "0",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "sas_address": "",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "sas_device_handle": "",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "scheduler_mode": "mq-deadline",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "sectors": 0,
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "sectorsize": "2048",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "size": 493568.0,
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "support_discard": "2048",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "type": "disk",
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:             "vendor": "QEMU"
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:         }
Sep 30 14:44:32 compute-0 agitated_tharp[274863]:     }
Sep 30 14:44:32 compute-0 agitated_tharp[274863]: ]
Sep 30 14:44:32 compute-0 systemd[1]: libpod-1e48506de58d1ba21e6bcdf8c79d2f03233260aecc0b8399968b48d43350d24c.scope: Deactivated successfully.
Sep 30 14:44:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:44:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:44:32 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:32 compute-0 podman[276073]: 2025-09-30 14:44:32.963522461 +0000 UTC m=+0.028661808 container died 1e48506de58d1ba21e6bcdf8c79d2f03233260aecc0b8399968b48d43350d24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:44:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-42f7ff6d19ecb3f3347fb57cc9271887dda6bd5b361424a8f9949feca6ab7c10-merged.mount: Deactivated successfully.
Sep 30 14:44:32 compute-0 nova_compute[261524]: 2025-09-30 14:44:32.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:33 compute-0 podman[276073]: 2025-09-30 14:44:33.007884436 +0000 UTC m=+0.073023763 container remove 1e48506de58d1ba21e6bcdf8c79d2f03233260aecc0b8399968b48d43350d24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:44:33 compute-0 systemd[1]: libpod-conmon-1e48506de58d1ba21e6bcdf8c79d2f03233260aecc0b8399968b48d43350d24c.scope: Deactivated successfully.
Sep 30 14:44:33 compute-0 sudo[274737]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:44:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:44:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:44:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:44:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:44:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:44:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:44:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:44:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:44:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:44:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:44:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:44:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:44:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:44:33 compute-0 sudo[276088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:44:33 compute-0 sudo[276088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:33 compute-0 sudo[276088]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:33.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:33 compute-0 sudo[276113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:44:33 compute-0 sudo[276113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:33.652Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:44:33 compute-0 podman[276180]: 2025-09-30 14:44:33.75396618 +0000 UTC m=+0.050869106 container create 5dc851cff56893222ed03f109d9bed54b03e4b4edb8c0a31d72995503b43c23b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:44:33 compute-0 systemd[1]: Started libpod-conmon-5dc851cff56893222ed03f109d9bed54b03e4b4edb8c0a31d72995503b43c23b.scope.
Sep 30 14:44:33 compute-0 podman[276180]: 2025-09-30 14:44:33.729375059 +0000 UTC m=+0.026278005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:44:33 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:44:33 compute-0 podman[276180]: 2025-09-30 14:44:33.855574505 +0000 UTC m=+0.152477431 container init 5dc851cff56893222ed03f109d9bed54b03e4b4edb8c0a31d72995503b43c23b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:44:33 compute-0 podman[276180]: 2025-09-30 14:44:33.866047938 +0000 UTC m=+0.162950844 container start 5dc851cff56893222ed03f109d9bed54b03e4b4edb8c0a31d72995503b43c23b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_stonebraker, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:44:33 compute-0 podman[276180]: 2025-09-30 14:44:33.870813292 +0000 UTC m=+0.167716198 container attach 5dc851cff56893222ed03f109d9bed54b03e4b4edb8c0a31d72995503b43c23b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:44:33 compute-0 silly_stonebraker[276197]: 167 167
Sep 30 14:44:33 compute-0 systemd[1]: libpod-5dc851cff56893222ed03f109d9bed54b03e4b4edb8c0a31d72995503b43c23b.scope: Deactivated successfully.
Sep 30 14:44:33 compute-0 podman[276180]: 2025-09-30 14:44:33.873000689 +0000 UTC m=+0.169903595 container died 5dc851cff56893222ed03f109d9bed54b03e4b4edb8c0a31d72995503b43c23b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:44:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:33.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-482e2e3d7015a4b1b2cac6d64bee3ec7b2021d43d3a2b676385fd3c4135834c4-merged.mount: Deactivated successfully.
Sep 30 14:44:33 compute-0 podman[276180]: 2025-09-30 14:44:33.928383851 +0000 UTC m=+0.225286807 container remove 5dc851cff56893222ed03f109d9bed54b03e4b4edb8c0a31d72995503b43c23b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:44:33 compute-0 systemd[1]: libpod-conmon-5dc851cff56893222ed03f109d9bed54b03e4b4edb8c0a31d72995503b43c23b.scope: Deactivated successfully.
Sep 30 14:44:33 compute-0 ceph-mon[74194]: pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 11 KiB/s wr, 44 op/s
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:44:33 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:44:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:34 compute-0 podman[276222]: 2025-09-30 14:44:34.102228977 +0000 UTC m=+0.049636153 container create 289dc9181fb23d51f553adff2dd1882c57bc342d5cad222a274a48a3ac32f9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Sep 30 14:44:34 compute-0 systemd[1]: Started libpod-conmon-289dc9181fb23d51f553adff2dd1882c57bc342d5cad222a274a48a3ac32f9f3.scope.
Sep 30 14:44:34 compute-0 podman[276222]: 2025-09-30 14:44:34.077702829 +0000 UTC m=+0.025109865 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:44:34 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8e06da63ba646f975e09e217ae14fc35dc234aa32c200bceef633baae320c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8e06da63ba646f975e09e217ae14fc35dc234aa32c200bceef633baae320c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8e06da63ba646f975e09e217ae14fc35dc234aa32c200bceef633baae320c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8e06da63ba646f975e09e217ae14fc35dc234aa32c200bceef633baae320c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8e06da63ba646f975e09e217ae14fc35dc234aa32c200bceef633baae320c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:34 compute-0 podman[276222]: 2025-09-30 14:44:34.212131399 +0000 UTC m=+0.159538435 container init 289dc9181fb23d51f553adff2dd1882c57bc342d5cad222a274a48a3ac32f9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:44:34 compute-0 podman[276222]: 2025-09-30 14:44:34.220393154 +0000 UTC m=+0.167800170 container start 289dc9181fb23d51f553adff2dd1882c57bc342d5cad222a274a48a3ac32f9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:44:34 compute-0 podman[276222]: 2025-09-30 14:44:34.226066151 +0000 UTC m=+0.173473187 container attach 289dc9181fb23d51f553adff2dd1882c57bc342d5cad222a274a48a3ac32f9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hoover, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:44:34 compute-0 silly_hoover[276238]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:44:34 compute-0 silly_hoover[276238]: --> All data devices are unavailable
Sep 30 14:44:34 compute-0 systemd[1]: libpod-289dc9181fb23d51f553adff2dd1882c57bc342d5cad222a274a48a3ac32f9f3.scope: Deactivated successfully.
Sep 30 14:44:34 compute-0 podman[276222]: 2025-09-30 14:44:34.561020512 +0000 UTC m=+0.508427538 container died 289dc9181fb23d51f553adff2dd1882c57bc342d5cad222a274a48a3ac32f9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Sep 30 14:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-de8e06da63ba646f975e09e217ae14fc35dc234aa32c200bceef633baae320c0-merged.mount: Deactivated successfully.
Sep 30 14:44:34 compute-0 podman[276222]: 2025-09-30 14:44:34.612650067 +0000 UTC m=+0.560057083 container remove 289dc9181fb23d51f553adff2dd1882c57bc342d5cad222a274a48a3ac32f9f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_hoover, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:44:34 compute-0 systemd[1]: libpod-conmon-289dc9181fb23d51f553adff2dd1882c57bc342d5cad222a274a48a3ac32f9f3.scope: Deactivated successfully.
Sep 30 14:44:34 compute-0 sudo[276113]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:34 compute-0 sudo[276265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:44:34 compute-0 sudo[276265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:34] "GET /metrics HTTP/1.1" 200 48525 "" "Prometheus/2.51.0"
Sep 30 14:44:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:34] "GET /metrics HTTP/1.1" 200 48525 "" "Prometheus/2.51.0"
Sep 30 14:44:34 compute-0 sudo[276265]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:44:34 compute-0 sudo[276290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:44:34 compute-0 sudo[276290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:35.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:35 compute-0 podman[276356]: 2025-09-30 14:44:35.293252717 +0000 UTC m=+0.091975926 container create f371e6c75bbf67ab7952cb63618caa22a7b7d0f269e0ef2dc4e50c2a51be0873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:44:35 compute-0 nova_compute[261524]: 2025-09-30 14:44:35.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:35 compute-0 podman[276356]: 2025-09-30 14:44:35.240492763 +0000 UTC m=+0.039216052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:44:35 compute-0 systemd[1]: Started libpod-conmon-f371e6c75bbf67ab7952cb63618caa22a7b7d0f269e0ef2dc4e50c2a51be0873.scope.
Sep 30 14:44:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:44:35 compute-0 podman[276356]: 2025-09-30 14:44:35.397610674 +0000 UTC m=+0.196333903 container init f371e6c75bbf67ab7952cb63618caa22a7b7d0f269e0ef2dc4e50c2a51be0873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sammet, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Sep 30 14:44:35 compute-0 podman[276356]: 2025-09-30 14:44:35.409483783 +0000 UTC m=+0.208207022 container start f371e6c75bbf67ab7952cb63618caa22a7b7d0f269e0ef2dc4e50c2a51be0873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sammet, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:44:35 compute-0 sweet_sammet[276372]: 167 167
Sep 30 14:44:35 compute-0 systemd[1]: libpod-f371e6c75bbf67ab7952cb63618caa22a7b7d0f269e0ef2dc4e50c2a51be0873.scope: Deactivated successfully.
Sep 30 14:44:35 compute-0 conmon[276372]: conmon f371e6c75bbf67ab7952 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f371e6c75bbf67ab7952cb63618caa22a7b7d0f269e0ef2dc4e50c2a51be0873.scope/container/memory.events
Sep 30 14:44:35 compute-0 podman[276356]: 2025-09-30 14:44:35.419224297 +0000 UTC m=+0.217947516 container attach f371e6c75bbf67ab7952cb63618caa22a7b7d0f269e0ef2dc4e50c2a51be0873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sammet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:44:35 compute-0 podman[276356]: 2025-09-30 14:44:35.420607853 +0000 UTC m=+0.219331122 container died f371e6c75bbf67ab7952cb63618caa22a7b7d0f269e0ef2dc4e50c2a51be0873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sammet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-320686a6d24ac597bd520695d9a293cc1f4a3194423be898b9a00530d733d325-merged.mount: Deactivated successfully.
Sep 30 14:44:35 compute-0 podman[276356]: 2025-09-30 14:44:35.547191799 +0000 UTC m=+0.345915018 container remove f371e6c75bbf67ab7952cb63618caa22a7b7d0f269e0ef2dc4e50c2a51be0873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_sammet, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:44:35 compute-0 systemd[1]: libpod-conmon-f371e6c75bbf67ab7952cb63618caa22a7b7d0f269e0ef2dc4e50c2a51be0873.scope: Deactivated successfully.
Sep 30 14:44:35 compute-0 podman[276399]: 2025-09-30 14:44:35.777064444 +0000 UTC m=+0.058496654 container create a4e1a4a4728bc730c0976096cd8dc56b5ee595b783c149ab023c4a140c17623a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:44:35 compute-0 podman[276399]: 2025-09-30 14:44:35.741257071 +0000 UTC m=+0.022689251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:44:35 compute-0 systemd[1]: Started libpod-conmon-a4e1a4a4728bc730c0976096cd8dc56b5ee595b783c149ab023c4a140c17623a.scope.
Sep 30 14:44:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:35.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46398c46fccd2f93b9f20fccf8a5a794b4fca5c8c03bb41d9db51b627bd50a28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46398c46fccd2f93b9f20fccf8a5a794b4fca5c8c03bb41d9db51b627bd50a28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46398c46fccd2f93b9f20fccf8a5a794b4fca5c8c03bb41d9db51b627bd50a28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46398c46fccd2f93b9f20fccf8a5a794b4fca5c8c03bb41d9db51b627bd50a28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:35 compute-0 podman[276399]: 2025-09-30 14:44:35.943704642 +0000 UTC m=+0.225136912 container init a4e1a4a4728bc730c0976096cd8dc56b5ee595b783c149ab023c4a140c17623a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:44:35 compute-0 podman[276399]: 2025-09-30 14:44:35.95052811 +0000 UTC m=+0.231960320 container start a4e1a4a4728bc730c0976096cd8dc56b5ee595b783c149ab023c4a140c17623a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:44:35 compute-0 podman[276399]: 2025-09-30 14:44:35.991964619 +0000 UTC m=+0.273396779 container attach a4e1a4a4728bc730c0976096cd8dc56b5ee595b783c149ab023c4a140c17623a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Sep 30 14:44:35 compute-0 ceph-mon[74194]: pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:44:36 compute-0 pensive_herschel[276417]: {
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:     "0": [
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:         {
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "devices": [
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "/dev/loop3"
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             ],
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "lv_name": "ceph_lv0",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "lv_size": "21470642176",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "name": "ceph_lv0",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "tags": {
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.cluster_name": "ceph",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.crush_device_class": "",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.encrypted": "0",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.osd_id": "0",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.type": "block",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.vdo": "0",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:                 "ceph.with_tpm": "0"
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             },
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "type": "block",
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:             "vg_name": "ceph_vg0"
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:         }
Sep 30 14:44:36 compute-0 pensive_herschel[276417]:     ]
Sep 30 14:44:36 compute-0 pensive_herschel[276417]: }
Sep 30 14:44:36 compute-0 systemd[1]: libpod-a4e1a4a4728bc730c0976096cd8dc56b5ee595b783c149ab023c4a140c17623a.scope: Deactivated successfully.
Sep 30 14:44:36 compute-0 podman[276399]: 2025-09-30 14:44:36.27586856 +0000 UTC m=+0.557300770 container died a4e1a4a4728bc730c0976096cd8dc56b5ee595b783c149ab023c4a140c17623a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-46398c46fccd2f93b9f20fccf8a5a794b4fca5c8c03bb41d9db51b627bd50a28-merged.mount: Deactivated successfully.
Sep 30 14:44:36 compute-0 podman[276399]: 2025-09-30 14:44:36.358397499 +0000 UTC m=+0.639829679 container remove a4e1a4a4728bc730c0976096cd8dc56b5ee595b783c149ab023c4a140c17623a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_herschel, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:44:36 compute-0 systemd[1]: libpod-conmon-a4e1a4a4728bc730c0976096cd8dc56b5ee595b783c149ab023c4a140c17623a.scope: Deactivated successfully.
Sep 30 14:44:36 compute-0 sudo[276290]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:36 compute-0 sudo[276437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:44:36 compute-0 sudo[276437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:36 compute-0 sudo[276437]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:36 compute-0 sudo[276462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:44:36 compute-0 sudo[276462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:44:36 compute-0 podman[276528]: 2025-09-30 14:44:36.98332453 +0000 UTC m=+0.062236351 container create 438db74866f93fe428da59c0aa495b17806d5f4cdd6cd33122323d9c350d70ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 14:44:37 compute-0 systemd[1]: Started libpod-conmon-438db74866f93fe428da59c0aa495b17806d5f4cdd6cd33122323d9c350d70ec.scope.
Sep 30 14:44:37 compute-0 podman[276528]: 2025-09-30 14:44:36.951960093 +0000 UTC m=+0.030872004 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:44:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:44:37 compute-0 podman[276528]: 2025-09-30 14:44:37.079646568 +0000 UTC m=+0.158558469 container init 438db74866f93fe428da59c0aa495b17806d5f4cdd6cd33122323d9c350d70ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:44:37 compute-0 podman[276528]: 2025-09-30 14:44:37.088660012 +0000 UTC m=+0.167571833 container start 438db74866f93fe428da59c0aa495b17806d5f4cdd6cd33122323d9c350d70ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:44:37 compute-0 sleepy_newton[276545]: 167 167
Sep 30 14:44:37 compute-0 systemd[1]: libpod-438db74866f93fe428da59c0aa495b17806d5f4cdd6cd33122323d9c350d70ec.scope: Deactivated successfully.
Sep 30 14:44:37 compute-0 podman[276528]: 2025-09-30 14:44:37.095968813 +0000 UTC m=+0.174880684 container attach 438db74866f93fe428da59c0aa495b17806d5f4cdd6cd33122323d9c350d70ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:44:37 compute-0 podman[276528]: 2025-09-30 14:44:37.096436255 +0000 UTC m=+0.175348116 container died 438db74866f93fe428da59c0aa495b17806d5f4cdd6cd33122323d9c350d70ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c8883497a5099c52dfd7a5ea794ded8b3477923cf7efe3ed3f4d9820f3c6508-merged.mount: Deactivated successfully.
Sep 30 14:44:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:37.146Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:44:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:37.148Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:44:37 compute-0 podman[276528]: 2025-09-30 14:44:37.154543537 +0000 UTC m=+0.233455358 container remove 438db74866f93fe428da59c0aa495b17806d5f4cdd6cd33122323d9c350d70ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_newton, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:44:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:37 compute-0 systemd[1]: libpod-conmon-438db74866f93fe428da59c0aa495b17806d5f4cdd6cd33122323d9c350d70ec.scope: Deactivated successfully.
Sep 30 14:44:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:37.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:37 compute-0 podman[276569]: 2025-09-30 14:44:37.444074055 +0000 UTC m=+0.125447687 container create 974899767d4bb157c890f829ace320aab34ca7a28f432cacb93adbf126ab103f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 14:44:37 compute-0 podman[276569]: 2025-09-30 14:44:37.367231835 +0000 UTC m=+0.048605577 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:44:37 compute-0 systemd[1]: Started libpod-conmon-974899767d4bb157c890f829ace320aab34ca7a28f432cacb93adbf126ab103f.scope.
Sep 30 14:44:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8dd1689be772714db4bbfd48b629b46ec328f1b7514dcc73af197d19062881/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8dd1689be772714db4bbfd48b629b46ec328f1b7514dcc73af197d19062881/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8dd1689be772714db4bbfd48b629b46ec328f1b7514dcc73af197d19062881/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8dd1689be772714db4bbfd48b629b46ec328f1b7514dcc73af197d19062881/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:37 compute-0 podman[276569]: 2025-09-30 14:44:37.561164154 +0000 UTC m=+0.242537866 container init 974899767d4bb157c890f829ace320aab34ca7a28f432cacb93adbf126ab103f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:44:37 compute-0 podman[276569]: 2025-09-30 14:44:37.568815623 +0000 UTC m=+0.250189295 container start 974899767d4bb157c890f829ace320aab34ca7a28f432cacb93adbf126ab103f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:44:37 compute-0 podman[276569]: 2025-09-30 14:44:37.593135096 +0000 UTC m=+0.274508758 container attach 974899767d4bb157c890f829ace320aab34ca7a28f432cacb93adbf126ab103f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:44:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:37.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:37 compute-0 nova_compute[261524]: 2025-09-30 14:44:37.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:38 compute-0 ceph-mon[74194]: pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:44:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:38.260 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:38.260 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:38.261 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:38 compute-0 lvm[276662]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:44:38 compute-0 lvm[276662]: VG ceph_vg0 finished
Sep 30 14:44:38 compute-0 nervous_driscoll[276587]: {}
Sep 30 14:44:38 compute-0 systemd[1]: libpod-974899767d4bb157c890f829ace320aab34ca7a28f432cacb93adbf126ab103f.scope: Deactivated successfully.
Sep 30 14:44:38 compute-0 systemd[1]: libpod-974899767d4bb157c890f829ace320aab34ca7a28f432cacb93adbf126ab103f.scope: Consumed 1.337s CPU time.
Sep 30 14:44:38 compute-0 podman[276569]: 2025-09-30 14:44:38.37911568 +0000 UTC m=+1.060489332 container died 974899767d4bb157c890f829ace320aab34ca7a28f432cacb93adbf126ab103f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e8dd1689be772714db4bbfd48b629b46ec328f1b7514dcc73af197d19062881-merged.mount: Deactivated successfully.
Sep 30 14:44:38 compute-0 podman[276569]: 2025-09-30 14:44:38.449340708 +0000 UTC m=+1.130714340 container remove 974899767d4bb157c890f829ace320aab34ca7a28f432cacb93adbf126ab103f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Sep 30 14:44:38 compute-0 systemd[1]: libpod-conmon-974899767d4bb157c890f829ace320aab34ca7a28f432cacb93adbf126ab103f.scope: Deactivated successfully.
Sep 30 14:44:38 compute-0 sudo[276462]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:44:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:44:38 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:38 compute-0 sudo[276679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:44:38 compute-0 sudo[276679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:38 compute-0 sudo[276679]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:44:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:39.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:39 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:39 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:44:39 compute-0 ceph-mon[74194]: pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:44:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:39.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:40 compute-0 nova_compute[261524]: 2025-09-30 14:44:40.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v875: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:44:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:41.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:41 compute-0 ceph-mon[74194]: pgmap v875: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:44:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:41.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v876: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:44:43 compute-0 nova_compute[261524]: 2025-09-30 14:44:42.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:43.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:43.654Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:44:43 compute-0 ceph-mon[74194]: pgmap v876: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:44:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:43.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:44:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:44:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:44] "GET /metrics HTTP/1.1" 200 48528 "" "Prometheus/2.51.0"
Sep 30 14:44:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:44] "GET /metrics HTTP/1.1" 200 48528 "" "Prometheus/2.51.0"
Sep 30 14:44:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v877: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:44:44 compute-0 sudo[276711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:44:44 compute-0 sudo[276711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:44:44 compute-0 sudo[276711]: pam_unix(sudo:session): session closed for user root
Sep 30 14:44:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:44:45 compute-0 nova_compute[261524]: 2025-09-30 14:44:45.148 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:45 compute-0 nova_compute[261524]: 2025-09-30 14:44:45.149 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:45 compute-0 nova_compute[261524]: 2025-09-30 14:44:45.185 2 DEBUG nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Sep 30 14:44:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:45.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:45 compute-0 nova_compute[261524]: 2025-09-30 14:44:45.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:45 compute-0 nova_compute[261524]: 2025-09-30 14:44:45.394 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:45 compute-0 nova_compute[261524]: 2025-09-30 14:44:45.395 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:45 compute-0 nova_compute[261524]: 2025-09-30 14:44:45.404 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Sep 30 14:44:45 compute-0 nova_compute[261524]: 2025-09-30 14:44:45.405 2 INFO nova.compute.claims [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Claim successful on node compute-0.ctlplane.example.com
Sep 30 14:44:45 compute-0 nova_compute[261524]: 2025-09-30 14:44:45.548 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:44:45 compute-0 ceph-mon[74194]: pgmap v877: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:44:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:45.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:44:45 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/123885430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.010 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.017 2 DEBUG nova.compute.provider_tree [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.086 2 DEBUG nova.scheduler.client.report [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.120 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.121 2 DEBUG nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.323 2 DEBUG nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.324 2 DEBUG nova.network.neutron [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.433 2 INFO nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.488 2 DEBUG nova.policy [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59c80c4f189d4667aec64b43afc69ed2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.532 2 DEBUG nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.712 2 DEBUG nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.714 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.714 2 INFO nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Creating image(s)
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.742 2 DEBUG nova.storage.rbd_utils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image ab354489-bdb3-49d0-9ed1-574d93130913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:44:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v878: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.776 2 DEBUG nova.storage.rbd_utils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image ab354489-bdb3-49d0-9ed1-574d93130913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.813 2 DEBUG nova.storage.rbd_utils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image ab354489-bdb3-49d0-9ed1-574d93130913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.818 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:44:46 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/123885430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.893 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.894 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.895 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.895 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.925 2 DEBUG nova.storage.rbd_utils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image ab354489-bdb3-49d0-9ed1-574d93130913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:44:46 compute-0 nova_compute[261524]: 2025-09-30 14:44:46.929 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 ab354489-bdb3-49d0-9ed1-574d93130913_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:44:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:47.149Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:44:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:47 compute-0 nova_compute[261524]: 2025-09-30 14:44:47.192 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 ab354489-bdb3-49d0-9ed1-574d93130913_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:44:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:47.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:47 compute-0 nova_compute[261524]: 2025-09-30 14:44:47.278 2 DEBUG nova.storage.rbd_utils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] resizing rbd image ab354489-bdb3-49d0-9ed1-574d93130913_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Sep 30 14:44:47 compute-0 nova_compute[261524]: 2025-09-30 14:44:47.415 2 DEBUG nova.objects.instance [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'migration_context' on Instance uuid ab354489-bdb3-49d0-9ed1-574d93130913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:44:47 compute-0 nova_compute[261524]: 2025-09-30 14:44:47.435 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Sep 30 14:44:47 compute-0 nova_compute[261524]: 2025-09-30 14:44:47.436 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Ensure instance console log exists: /var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Sep 30 14:44:47 compute-0 nova_compute[261524]: 2025-09-30 14:44:47.436 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:47 compute-0 nova_compute[261524]: 2025-09-30 14:44:47.436 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:47 compute-0 nova_compute[261524]: 2025-09-30 14:44:47.437 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:47.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:47 compute-0 ceph-mon[74194]: pgmap v878: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:44:48 compute-0 nova_compute[261524]: 2025-09-30 14:44:48.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:48 compute-0 nova_compute[261524]: 2025-09-30 14:44:48.283 2 DEBUG nova.network.neutron [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Successfully created port: 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Sep 30 14:44:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v879: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:44:48 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:48.806 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:44:48 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:48.807 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:44:48 compute-0 nova_compute[261524]: 2025-09-30 14:44:48.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:49.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:49 compute-0 nova_compute[261524]: 2025-09-30 14:44:49.289 2 DEBUG nova.network.neutron [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Successfully updated port: 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Sep 30 14:44:49 compute-0 nova_compute[261524]: 2025-09-30 14:44:49.304 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:44:49 compute-0 nova_compute[261524]: 2025-09-30 14:44:49.304 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquired lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:44:49 compute-0 nova_compute[261524]: 2025-09-30 14:44:49.305 2 DEBUG nova.network.neutron [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Sep 30 14:44:49 compute-0 nova_compute[261524]: 2025-09-30 14:44:49.391 2 DEBUG nova.compute.manager [req-9299a849-d679-4146-911a-c9c895f6ca05 req-bad89d1f-5baf-4940-bae7-40dce66a5f95 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-changed-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:44:49 compute-0 nova_compute[261524]: 2025-09-30 14:44:49.391 2 DEBUG nova.compute.manager [req-9299a849-d679-4146-911a-c9c895f6ca05 req-bad89d1f-5baf-4940-bae7-40dce66a5f95 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Refreshing instance network info cache due to event network-changed-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:44:49 compute-0 nova_compute[261524]: 2025-09-30 14:44:49.391 2 DEBUG oslo_concurrency.lockutils [req-9299a849-d679-4146-911a-c9c895f6ca05 req-bad89d1f-5baf-4940-bae7-40dce66a5f95 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:44:49 compute-0 nova_compute[261524]: 2025-09-30 14:44:49.481 2 DEBUG nova.network.neutron [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Sep 30 14:44:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:49.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:49 compute-0 ceph-mon[74194]: pgmap v879: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.266 2 DEBUG nova.network.neutron [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.402 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Releasing lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.402 2 DEBUG nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Instance network_info: |[{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.403 2 DEBUG oslo_concurrency.lockutils [req-9299a849-d679-4146-911a-c9c895f6ca05 req-bad89d1f-5baf-4940-bae7-40dce66a5f95 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.403 2 DEBUG nova.network.neutron [req-9299a849-d679-4146-911a-c9c895f6ca05 req-bad89d1f-5baf-4940-bae7-40dce66a5f95 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Refreshing network info cache for port 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.406 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Start _get_guest_xml network_info=[{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T14:39:17Z,direct_url=<?>,disk_format='qcow2',id=7c70cf84-edc3-42b2-a094-ae3c1dbaffe4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5beed35d375f4bd185a6774dc475e0b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T14:39:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'disk_bus': 'virtio', 'image_id': '7c70cf84-edc3-42b2-a094-ae3c1dbaffe4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.411 2 WARNING nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.417 2 DEBUG nova.virt.libvirt.host [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.418 2 DEBUG nova.virt.libvirt.host [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.421 2 DEBUG nova.virt.libvirt.host [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.422 2 DEBUG nova.virt.libvirt.host [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.422 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.423 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T14:39:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='64f3d3b9-41b6-4b89-8bbd-f654faf17546',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T14:39:17Z,direct_url=<?>,disk_format='qcow2',id=7c70cf84-edc3-42b2-a094-ae3c1dbaffe4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5beed35d375f4bd185a6774dc475e0b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T14:39:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.423 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.423 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.424 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.424 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.424 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.425 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.425 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.425 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.425 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.426 2 DEBUG nova.virt.hardware [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.429 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:44:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:44:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 14:44:50 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183008482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.871 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.907 2 DEBUG nova.storage.rbd_utils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image ab354489-bdb3-49d0-9ed1-574d93130913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:44:50 compute-0 nova_compute[261524]: 2025-09-30 14:44:50.912 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:44:50 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2183008482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:44:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:51.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 14:44:51 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2072905592' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.393 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.396 2 DEBUG nova.virt.libvirt.vif [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-09-30T14:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-711458846',display_name='tempest-TestNetworkBasicOps-server-711458846',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-711458846',id=6,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMQL53T2ZkSAoVzfinB0Xb6YV6zqFtICzdovU1Kn/PIvW0fTnkL2hml556IQQU+IFdjIRu6Xc3RQKHc2DkPb73zFKtN5c4E62Q7wZZkQI9VBc0aWDqG12KKHVj732hp6w==',key_name='tempest-TestNetworkBasicOps-1073344022',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-z3tdfpa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T14:44:46Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=ab354489-bdb3-49d0-9ed1-574d93130913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} 
virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.396 2 DEBUG nova.network.os_vif_util [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.397 2 DEBUG nova.network.os_vif_util [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:b9:ad,bridge_name='br-int',has_traffic_filtering=True,id=70e1bfe9-6006-4e08-9c7f-c0d64c8269a0,network=Network(653945fb-0a1b-4a3b-b45f-4bafe62f765f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e1bfe9-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.399 2 DEBUG nova.objects.instance [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'pci_devices' on Instance uuid ab354489-bdb3-49d0-9ed1-574d93130913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.416 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] End _get_guest_xml xml=<domain type="kvm">
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <uuid>ab354489-bdb3-49d0-9ed1-574d93130913</uuid>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <name>instance-00000006</name>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <memory>131072</memory>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <vcpu>1</vcpu>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <nova:name>tempest-TestNetworkBasicOps-server-711458846</nova:name>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <nova:creationTime>2025-09-30 14:44:50</nova:creationTime>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <nova:flavor name="m1.nano">
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <nova:memory>128</nova:memory>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <nova:disk>1</nova:disk>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <nova:swap>0</nova:swap>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <nova:vcpus>1</nova:vcpus>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       </nova:flavor>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <nova:owner>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       </nova:owner>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <nova:ports>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <nova:port uuid="70e1bfe9-6006-4e08-9c7f-c0d64c8269a0">
Sep 30 14:44:51 compute-0 nova_compute[261524]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         </nova:port>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       </nova:ports>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     </nova:instance>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <sysinfo type="smbios">
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <system>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <entry name="manufacturer">RDO</entry>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <entry name="product">OpenStack Compute</entry>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <entry name="serial">ab354489-bdb3-49d0-9ed1-574d93130913</entry>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <entry name="uuid">ab354489-bdb3-49d0-9ed1-574d93130913</entry>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <entry name="family">Virtual Machine</entry>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     </system>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <os>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <boot dev="hd"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <smbios mode="sysinfo"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   </os>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <features>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <vmcoreinfo/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   </features>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <clock offset="utc">
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <timer name="hpet" present="no"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <cpu mode="host-model" match="exact">
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <disk type="network" device="disk">
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <driver type="raw" cache="none"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <source protocol="rbd" name="vms/ab354489-bdb3-49d0-9ed1-574d93130913_disk">
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <host name="192.168.122.100" port="6789"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <host name="192.168.122.102" port="6789"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <host name="192.168.122.101" port="6789"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       </source>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <auth username="openstack">
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <secret type="ceph" uuid="5e3c7776-ac03-5698-b79f-a6dc2d80cae6"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <target dev="vda" bus="virtio"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <disk type="network" device="cdrom">
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <driver type="raw" cache="none"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <source protocol="rbd" name="vms/ab354489-bdb3-49d0-9ed1-574d93130913_disk.config">
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <host name="192.168.122.100" port="6789"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <host name="192.168.122.102" port="6789"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <host name="192.168.122.101" port="6789"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       </source>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <auth username="openstack">
Sep 30 14:44:51 compute-0 nova_compute[261524]:         <secret type="ceph" uuid="5e3c7776-ac03-5698-b79f-a6dc2d80cae6"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <target dev="sda" bus="sata"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <interface type="ethernet">
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <mac address="fa:16:3e:db:b9:ad"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <model type="virtio"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <mtu size="1442"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <target dev="tap70e1bfe9-60"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <serial type="pty">
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <log file="/var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/console.log" append="off"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <video>
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <model type="virtio"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     </video>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <input type="tablet" bus="usb"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <rng model="virtio">
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <backend model="random">/dev/urandom</backend>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <controller type="usb" index="0"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     <memballoon model="virtio">
Sep 30 14:44:51 compute-0 nova_compute[261524]:       <stats period="10"/>
Sep 30 14:44:51 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:44:51 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:44:51 compute-0 nova_compute[261524]: </domain>
Sep 30 14:44:51 compute-0 nova_compute[261524]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
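[annotation] The <devices> section dumped above is the guest definition nova-compute hands to libvirt in _get_guest_xml. A minimal sketch of re-reading and inspecting that same definition once the guest exists, assuming the libvirt Python bindings are installed on the compute host and using the libvirt domain name instance-00000006 that appears later in this log (illustrative only, not the Nova code path):

    import xml.etree.ElementTree as ET
    import libvirt

    # Connect to the same system hypervisor nova-compute uses.
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000006")   # libvirt name, not the Nova UUID
    xml = dom.XMLDesc(0)                           # live definition, equivalent to the dump above

    # Count the pcie-root-port controllers Nova pre-creates for hotplug headroom.
    root = ET.fromstring(xml)
    ports = [c for c in root.findall("./devices/controller")
             if c.get("type") == "pci" and c.get("model") == "pcie-root-port"]
    print(f"{len(ports)} pcie-root-port controllers, "
          f"{len(root.findall('./devices/interface'))} interface(s) defined")
    conn.close()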
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.418 2 DEBUG nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Preparing to wait for external event network-vif-plugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.418 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.418 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.418 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.419 2 DEBUG nova.virt.libvirt.vif [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-09-30T14:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-711458846',display_name='tempest-TestNetworkBasicOps-server-711458846',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-711458846',id=6,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMQL53T2ZkSAoVzfinB0Xb6YV6zqFtICzdovU1Kn/PIvW0fTnkL2hml556IQQU+IFdjIRu6Xc3RQKHc2DkPb73zFKtN5c4E62Q7wZZkQI9VBc0aWDqG12KKHVj732hp6w==',key_name='tempest-TestNetworkBasicOps-1073344022',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-z3tdfpa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T14:44:46Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=ab354489-bdb3-49d0-9ed1-574d93130913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.419 2 DEBUG nova.network.os_vif_util [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.420 2 DEBUG nova.network.os_vif_util [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:b9:ad,bridge_name='br-int',has_traffic_filtering=True,id=70e1bfe9-6006-4e08-9c7f-c0d64c8269a0,network=Network(653945fb-0a1b-4a3b-b45f-4bafe62f765f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e1bfe9-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.420 2 DEBUG os_vif [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:b9:ad,bridge_name='br-int',has_traffic_filtering=True,id=70e1bfe9-6006-4e08-9c7f-c0d64c8269a0,network=Network(653945fb-0a1b-4a3b-b45f-4bafe62f765f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e1bfe9-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.422 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.426 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70e1bfe9-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.427 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70e1bfe9-60, col_values=(('external_ids', {'iface-id': '70e1bfe9-6006-4e08-9c7f-c0d64c8269a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:b9:ad', 'vm-uuid': 'ab354489-bdb3-49d0-9ed1-574d93130913'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:51 compute-0 NetworkManager[45472]: <info>  [1759243491.4298] manager: (tap70e1bfe9-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.439 2 INFO os_vif [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:b9:ad,bridge_name='br-int',has_traffic_filtering=True,id=70e1bfe9-6006-4e08-9c7f-c0d64c8269a0,network=Network(653945fb-0a1b-4a3b-b45f-4bafe62f765f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e1bfe9-60')
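[annotation] The AddPortCommand/DbSetCommand transactions logged just above are os-vif's ovsdbapp calls against the local OVS database. Roughly the same effect, expressed as a single ovs-vsctl invocation with the values taken from the log; a sketch assuming ovs-vsctl is on the host's PATH, not what os-vif literally executes:

    import subprocess

    port = "tap70e1bfe9-60"
    external_ids = {
        "iface-id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0",   # Neutron port UUID
        "iface-status": "active",
        "attached-mac": "fa:16:3e:db:b9:ad",
        "vm-uuid": "ab354489-bdb3-49d0-9ed1-574d93130913",
    }

    # --may-exist add-port mirrors AddPortCommand(may_exist=True); the trailing
    # "set Interface" mirrors the DbSetCommand that writes external_ids.
    cmd = ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{key}={value}" for key, value in external_ids.items()]
    subprocess.run(cmd, check=True)   # ovn-controller claims the lport afterwards (see below)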
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.505 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.506 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.506 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No VIF found with MAC fa:16:3e:db:b9:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.507 2 INFO nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Using config drive
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.538 2 DEBUG nova.storage.rbd_utils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image ab354489-bdb3-49d0-9ed1-574d93130913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.544 2 DEBUG nova.network.neutron [req-9299a849-d679-4146-911a-c9c895f6ca05 req-bad89d1f-5baf-4940-bae7-40dce66a5f95 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updated VIF entry in instance network info cache for port 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.545 2 DEBUG nova.network.neutron [req-9299a849-d679-4146-911a-c9c895f6ca05 req-bad89d1f-5baf-4940-bae7-40dce66a5f95 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.569 2 DEBUG oslo_concurrency.lockutils [req-9299a849-d679-4146-911a-c9c895f6ca05 req-bad89d1f-5baf-4940-bae7-40dce66a5f95 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.791 2 INFO nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Creating config drive at /var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/disk.config
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.798 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgrgt7htc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:44:51 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:51.810 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:44:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:51.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.934 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgrgt7htc" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:44:51 compute-0 ceph-mon[74194]: pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:44:51 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2072905592' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.966 2 DEBUG nova.storage.rbd_utils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image ab354489-bdb3-49d0-9ed1-574d93130913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:44:51 compute-0 nova_compute[261524]: 2025-09-30 14:44:51.971 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/disk.config ab354489-bdb3-49d0-9ed1-574d93130913_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.145 2 DEBUG oslo_concurrency.processutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/disk.config ab354489-bdb3-49d0-9ed1-574d93130913_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.146 2 INFO nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Deleting local config drive /var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/disk.config because it was imported into RBD.
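[annotation] The config-drive step above comes down to the two commands visible in the surrounding DEBUG lines: build an ISO9660 image from the staged metadata tree, then import it into Ceph, after which the local copy is deleted. A standalone sketch of the same sequence, assuming mkisofs and the rbd CLI are installed; /tmp/tmpgrgt7htc is just the throwaway temp dir the real flow staged:

    import subprocess

    instance = "ab354489-bdb3-49d0-9ed1-574d93130913"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"

    # 1. Build the ISO9660 config drive (volume label config-2) from the metadata tree.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase", "-allow-multidot",
         "-l", "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpgrgt7htc"],
        check=True)

    # 2. Import it into the Ceph "vms" pool as <uuid>_disk.config; nova-compute then
    #    removes the local file, as the INFO line above notes.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)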
Sep 30 14:44:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:52 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 14:44:52 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 14:44:52 compute-0 kernel: tap70e1bfe9-60: entered promiscuous mode
Sep 30 14:44:52 compute-0 NetworkManager[45472]: <info>  [1759243492.2651] manager: (tap70e1bfe9-60): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Sep 30 14:44:52 compute-0 ovn_controller[154021]: 2025-09-30T14:44:52Z|00058|binding|INFO|Claiming lport 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 for this chassis.
Sep 30 14:44:52 compute-0 ovn_controller[154021]: 2025-09-30T14:44:52Z|00059|binding|INFO|70e1bfe9-6006-4e08-9c7f-c0d64c8269a0: Claiming fa:16:3e:db:b9:ad 10.100.0.14
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.283 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:b9:ad 10.100.0.14'], port_security=['fa:16:3e:db:b9:ad 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ab354489-bdb3-49d0-9ed1-574d93130913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-653945fb-0a1b-4a3b-b45f-4bafe62f765f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a439bb63-9919-40fb-8adf-828076e3652c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f85ab132-9b06-4fe7-bf67-10b54f3571f8, chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=70e1bfe9-6006-4e08-9c7f-c0d64c8269a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.284 163966 INFO neutron.agent.ovn.metadata.agent [-] Port 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 in datapath 653945fb-0a1b-4a3b-b45f-4bafe62f765f bound to our chassis
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.285 163966 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 653945fb-0a1b-4a3b-b45f-4bafe62f765f
Sep 30 14:44:52 compute-0 systemd-udevd[277084]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.300 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[80917af9-0a02-41d6-9ea5-d7f5a8957280]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.302 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap653945fb-01 in ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Sep 30 14:44:52 compute-0 systemd-machined[215710]: New machine qemu-3-instance-00000006.
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.304 269027 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap653945fb-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.304 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[4f8ed7c2-bae7-4c2f-9c08-e3e49f55ff9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.305 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[940d4131-e41c-4903-8f11-cdc41cbee7b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 NetworkManager[45472]: <info>  [1759243492.3108] device (tap70e1bfe9-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 14:44:52 compute-0 NetworkManager[45472]: <info>  [1759243492.3122] device (tap70e1bfe9-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.319 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[61998ba1-1583-4880-b602-38dbdbb3be3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000006.
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:52 compute-0 ovn_controller[154021]: 2025-09-30T14:44:52Z|00060|binding|INFO|Setting lport 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 ovn-installed in OVS
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.349 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[8480b5a7-18d3-49eb-9001-d32db1629fd5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_controller[154021]: 2025-09-30T14:44:52Z|00061|binding|INFO|Setting lport 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 up in Southbound
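[annotation] Once ovn-controller reports "Setting lport ... ovn-installed in OVS", the flag is written into the Interface row's external_ids in the local OVS database alongside the keys set earlier. A quick check from the compute host, assuming ovs-vsctl is available (illustrative):

    import subprocess

    # Dump the external_ids of the tap interface; expect iface-id, attached-mac,
    # vm-uuid and ovn-installed="true" once the binding is complete.
    out = subprocess.run(
        ["ovs-vsctl", "get", "Interface", "tap70e1bfe9-60", "external_ids"],
        check=True, capture_output=True, text=True).stdout
    print(out)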
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.388 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[4cdb9160-3154-4ca1-adee-45452b4122fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.395 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[77ab579e-51ed-41a3-b580-9dd26b3c194a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 NetworkManager[45472]: <info>  [1759243492.3967] manager: (tap653945fb-00): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Sep 30 14:44:52 compute-0 systemd-udevd[277087]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.440 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[1881b0f6-9fe1-4017-9214-48ad4009b057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.444 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[97f4df35-9513-4210-94e5-f210a1706e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 NetworkManager[45472]: <info>  [1759243492.4690] device (tap653945fb-00): carrier: link connected
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.474 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2059df-fc59-4294-9536-212cbe82cf9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.490 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[b4378497-a273-4fb6-9a79-e37b2a5dd75e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap653945fb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:77:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685482, 'reachable_time': 21947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277116, 'error': None, 'target': 'ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.503 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[a6495748-5a3d-47ee-aaec-ab04aa91f14a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:771b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685482, 'tstamp': 685482}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277117, 'error': None, 'target': 'ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.521 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[6945c29d-f94a-4385-be48-f42e9635817c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap653945fb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:77:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685482, 'reachable_time': 21947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277118, 'error': None, 'target': 'ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.550 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[1024dd3c-5c94-4e12-8049-23381eac034e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.608 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[f20ed943-aaf8-4595-be1b-518cf4ca3a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.610 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap653945fb-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.610 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.611 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap653945fb-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:44:52 compute-0 kernel: tap653945fb-00: entered promiscuous mode
Sep 30 14:44:52 compute-0 NetworkManager[45472]: <info>  [1759243492.6146] manager: (tap653945fb-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.616 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap653945fb-00, col_values=(('external_ids', {'iface-id': '774ce5b0-5e80-4a27-9cdb-1f1629fd42f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:44:52 compute-0 ovn_controller[154021]: 2025-09-30T14:44:52Z|00062|binding|INFO|Releasing lport 774ce5b0-5e80-4a27-9cdb-1f1629fd42f7 from this chassis (sb_readonly=0)
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.637 163966 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/653945fb-0a1b-4a3b-b45f-4bafe62f765f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/653945fb-0a1b-4a3b-b45f-4bafe62f765f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.638 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[abb3273a-7ef5-458e-aa76-e1cf89be5717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.639 163966 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: global
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     log         /dev/log local0 debug
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     log-tag     haproxy-metadata-proxy-653945fb-0a1b-4a3b-b45f-4bafe62f765f
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     user        root
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     group       root
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     maxconn     1024
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     pidfile     /var/lib/neutron/external/pids/653945fb-0a1b-4a3b-b45f-4bafe62f765f.pid.haproxy
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     daemon
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: defaults
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     log global
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     mode http
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     option httplog
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     option dontlognull
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     option http-server-close
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     option forwardfor
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     retries                 3
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     timeout http-request    30s
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     timeout connect         30s
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     timeout client          32s
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     timeout server          32s
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     timeout http-keep-alive 30s
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: listen listener
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     bind 169.254.169.254:80
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:     http-request add-header X-OVN-Network-ID 653945fb-0a1b-4a3b-b45f-4bafe62f765f
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Sep 30 14:44:52 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:44:52.640 163966 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f', 'env', 'PROCESS_TAG=haproxy-653945fb-0a1b-4a3b-b45f-4bafe62f765f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/653945fb-0a1b-4a3b-b45f-4bafe62f765f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
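[annotation] The haproxy configuration dumped above binds 169.254.169.254:80 inside the ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f namespace and forwards requests to the metadata agent's unix socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header. A liveness check from the host, assuming iproute2 and curl are installed; the proxy will likely answer with a non-200 status here because the request does not originate from an instance IP (illustrative only):

    import subprocess

    ns = "ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f"
    # Print only the HTTP status code; the point is that haproxy answers at all.
    subprocess.run(
        ["ip", "netns", "exec", ns,
         "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}\n",
         "http://169.254.169.254/openstack/latest/meta_data.json"],
        check=False)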
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.738 2 DEBUG nova.compute.manager [req-ec4115ff-c007-444b-bee5-cfa65c8eb816 req-151dfe59-85c8-4e89-93af-dd4f8994aecb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-plugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.739 2 DEBUG oslo_concurrency.lockutils [req-ec4115ff-c007-444b-bee5-cfa65c8eb816 req-151dfe59-85c8-4e89-93af-dd4f8994aecb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.739 2 DEBUG oslo_concurrency.lockutils [req-ec4115ff-c007-444b-bee5-cfa65c8eb816 req-151dfe59-85c8-4e89-93af-dd4f8994aecb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.739 2 DEBUG oslo_concurrency.lockutils [req-ec4115ff-c007-444b-bee5-cfa65c8eb816 req-151dfe59-85c8-4e89-93af-dd4f8994aecb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:52 compute-0 nova_compute[261524]: 2025-09-30 14:44:52.740 2 DEBUG nova.compute.manager [req-ec4115ff-c007-444b-bee5-cfa65c8eb816 req-151dfe59-85c8-4e89-93af-dd4f8994aecb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Processing event network-vif-plugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Sep 30 14:44:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v881: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:44:53 compute-0 podman[277192]: 2025-09-30 14:44:53.045730259 +0000 UTC m=+0.052678723 container create 8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:44:53 compute-0 systemd[1]: Started libpod-conmon-8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1.scope.
Sep 30 14:44:53 compute-0 podman[277192]: 2025-09-30 14:44:53.016269271 +0000 UTC m=+0.023217745 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Sep 30 14:44:53 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b98b075e583dcc583f15e9ea5f42d76d9d59597714d4f45cba2b1331fbb896/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 14:44:53 compute-0 podman[277192]: 2025-09-30 14:44:53.152295383 +0000 UTC m=+0.159243847 container init 8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:44:53 compute-0 podman[277192]: 2025-09-30 14:44:53.158671509 +0000 UTC m=+0.165619953 container start 8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 14:44:53 compute-0 neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f[277207]: [NOTICE]   (277211) : New worker (277213) forked
Sep 30 14:44:53 compute-0 neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f[277207]: [NOTICE]   (277211) : Loading success.
Sep 30 14:44:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.361 2 DEBUG nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.363 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243493.3603592, ab354489-bdb3-49d0-9ed1-574d93130913 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.363 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] VM Started (Lifecycle Event)
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.367 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.373 2 INFO nova.virt.libvirt.driver [-] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Instance spawned successfully.
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.374 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.385 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.388 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.397 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.397 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.398 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.398 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.398 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.399 2 DEBUG nova.virt.libvirt.driver [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.408 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] During sync_power_state the instance has a pending task (spawning). Skip.
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.408 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243493.362115, ab354489-bdb3-49d0-9ed1-574d93130913 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.409 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] VM Paused (Lifecycle Event)
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.474 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.479 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243493.3663652, ab354489-bdb3-49d0-9ed1-574d93130913 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.480 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] VM Resumed (Lifecycle Event)
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.508 2 INFO nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Took 6.80 seconds to spawn the instance on the hypervisor.
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.509 2 DEBUG nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.512 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.524 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.559 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] During sync_power_state the instance has a pending task (spawning). Skip.
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.578 2 INFO nova.compute.manager [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Took 8.30 seconds to build instance.
Sep 30 14:44:53 compute-0 nova_compute[261524]: 2025-09-30 14:44:53.593 2 DEBUG oslo_concurrency.lockutils [None req-171103c3-7356-45b4-9ad5-c98470569add 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:53.655Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:44:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:53.655Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:44:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:53.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:53 compute-0 ceph-mon[74194]: pgmap v881: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:44:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:54] "GET /metrics HTTP/1.1" 200 48528 "" "Prometheus/2.51.0"
Sep 30 14:44:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:44:54] "GET /metrics HTTP/1.1" 200 48528 "" "Prometheus/2.51.0"
Sep 30 14:44:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v882: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:44:54 compute-0 nova_compute[261524]: 2025-09-30 14:44:54.808 2 DEBUG nova.compute.manager [req-82d1244a-6f81-44cd-a34b-f0b542c67e71 req-da664c92-6fe2-4813-a78b-aac4cc767087 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-plugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:44:54 compute-0 nova_compute[261524]: 2025-09-30 14:44:54.809 2 DEBUG oslo_concurrency.lockutils [req-82d1244a-6f81-44cd-a34b-f0b542c67e71 req-da664c92-6fe2-4813-a78b-aac4cc767087 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:54 compute-0 nova_compute[261524]: 2025-09-30 14:44:54.809 2 DEBUG oslo_concurrency.lockutils [req-82d1244a-6f81-44cd-a34b-f0b542c67e71 req-da664c92-6fe2-4813-a78b-aac4cc767087 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:54 compute-0 nova_compute[261524]: 2025-09-30 14:44:54.809 2 DEBUG oslo_concurrency.lockutils [req-82d1244a-6f81-44cd-a34b-f0b542c67e71 req-da664c92-6fe2-4813-a78b-aac4cc767087 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:54 compute-0 nova_compute[261524]: 2025-09-30 14:44:54.810 2 DEBUG nova.compute.manager [req-82d1244a-6f81-44cd-a34b-f0b542c67e71 req-da664c92-6fe2-4813-a78b-aac4cc767087 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] No waiting events found dispatching network-vif-plugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:44:54 compute-0 nova_compute[261524]: 2025-09-30 14:44:54.810 2 WARNING nova.compute.manager [req-82d1244a-6f81-44cd-a34b-f0b542c67e71 req-da664c92-6fe2-4813-a78b-aac4cc767087 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received unexpected event network-vif-plugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 for instance with vm_state active and task_state None.
Sep 30 14:44:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:55.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:55 compute-0 nova_compute[261524]: 2025-09-30 14:44:55.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:44:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:55.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:44:56 compute-0 ceph-mon[74194]: pgmap v882: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:44:56 compute-0 nova_compute[261524]: 2025-09-30 14:44:56.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v883: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:44:57 compute-0 ovn_controller[154021]: 2025-09-30T14:44:57Z|00063|binding|INFO|Releasing lport 774ce5b0-5e80-4a27-9cdb-1f1629fd42f7 from this chassis (sb_readonly=0)
Sep 30 14:44:57 compute-0 NetworkManager[45472]: <info>  [1759243497.0842] manager: (patch-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Sep 30 14:44:57 compute-0 NetworkManager[45472]: <info>  [1759243497.0852] manager: (patch-br-int-to-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:57 compute-0 ovn_controller[154021]: 2025-09-30T14:44:57Z|00064|binding|INFO|Releasing lport 774ce5b0-5e80-4a27-9cdb-1f1629fd42f7 from this chassis (sb_readonly=0)
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:44:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:57.151Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:44:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:44:57.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:44:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:44:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:57.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.425 2 DEBUG nova.compute.manager [req-4d58665d-9217-4e0d-acda-fb403a0ff1bd req-92789fba-8a5f-4440-b692-9d2db4e24e24 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-changed-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.426 2 DEBUG nova.compute.manager [req-4d58665d-9217-4e0d-acda-fb403a0ff1bd req-92789fba-8a5f-4440-b692-9d2db4e24e24 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Refreshing instance network info cache due to event network-changed-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.426 2 DEBUG oslo_concurrency.lockutils [req-4d58665d-9217-4e0d-acda-fb403a0ff1bd req-92789fba-8a5f-4440-b692-9d2db4e24e24 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.426 2 DEBUG oslo_concurrency.lockutils [req-4d58665d-9217-4e0d-acda-fb403a0ff1bd req-92789fba-8a5f-4440-b692-9d2db4e24e24 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.426 2 DEBUG nova.network.neutron [req-4d58665d-9217-4e0d-acda-fb403a0ff1bd req-92789fba-8a5f-4440-b692-9d2db4e24e24 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Refreshing network info cache for port 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.859 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.882 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Triggering sync for uuid ab354489-bdb3-49d0-9ed1-574d93130913 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.883 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.884 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "ab354489-bdb3-49d0-9ed1-574d93130913" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:44:57 compute-0 nova_compute[261524]: 2025-09-30 14:44:57.918 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "ab354489-bdb3-49d0-9ed1-574d93130913" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:44:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:57.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:58 compute-0 ceph-mon[74194]: pgmap v883: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:44:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v884: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:44:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:44:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:44:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:44:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:44:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:44:59 compute-0 podman[277232]: 2025-09-30 14:44:59.147562255 +0000 UTC m=+0.063538835 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Sep 30 14:44:59 compute-0 podman[277229]: 2025-09-30 14:44:59.185622056 +0000 UTC m=+0.111175035 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Sep 30 14:44:59 compute-0 podman[277231]: 2025-09-30 14:44:59.186563351 +0000 UTC m=+0.108728982 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 14:44:59 compute-0 podman[277230]: 2025-09-30 14:44:59.191004476 +0000 UTC m=+0.114335808 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Sep 30 14:44:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:44:59.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:59 compute-0 nova_compute[261524]: 2025-09-30 14:44:59.438 2 DEBUG nova.network.neutron [req-4d58665d-9217-4e0d-acda-fb403a0ff1bd req-92789fba-8a5f-4440-b692-9d2db4e24e24 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updated VIF entry in instance network info cache for port 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:44:59 compute-0 nova_compute[261524]: 2025-09-30 14:44:59.438 2 DEBUG nova.network.neutron [req-4d58665d-9217-4e0d-acda-fb403a0ff1bd req-92789fba-8a5f-4440-b692-9d2db4e24e24 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:44:59 compute-0 nova_compute[261524]: 2025-09-30 14:44:59.466 2 DEBUG oslo_concurrency.lockutils [req-4d58665d-9217-4e0d-acda-fb403a0ff1bd req-92789fba-8a5f-4440-b692-9d2db4e24e24 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:44:59
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['.nfs', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'images', 'vms']
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:44:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:44:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:44:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:44:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:44:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:44:59.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:44:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:45:00 compute-0 ceph-mon[74194]: pgmap v884: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:45:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:45:00 compute-0 nova_compute[261524]: 2025-09-30 14:45:00.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v885: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:45:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:45:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:45:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:45:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:45:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:45:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:45:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:45:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:45:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:45:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:45:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:45:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:01.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:45:01 compute-0 nova_compute[261524]: 2025-09-30 14:45:01.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:45:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:01.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:45:02 compute-0 ceph-mon[74194]: pgmap v885: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:45:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v886: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:45:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:45:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:03.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:45:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:03.656Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:45:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:03.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:04 compute-0 ceph-mon[74194]: pgmap v886: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.132610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243504132752, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2121, "num_deletes": 251, "total_data_size": 4262217, "memory_usage": 4338192, "flush_reason": "Manual Compaction"}
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243504218948, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4112172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24944, "largest_seqno": 27064, "table_properties": {"data_size": 4102709, "index_size": 5957, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19601, "raw_average_key_size": 20, "raw_value_size": 4083768, "raw_average_value_size": 4227, "num_data_blocks": 260, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759243298, "oldest_key_time": 1759243298, "file_creation_time": 1759243504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 86446 microseconds, and 10282 cpu microseconds.
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.219051) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4112172 bytes OK
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.219106) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.234634) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.234686) EVENT_LOG_v1 {"time_micros": 1759243504234673, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.234718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4253635, prev total WAL file size 4253635, number of live WAL files 2.
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.236711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(4015KB)], [56(12MB)]
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243504236826, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 17107266, "oldest_snapshot_seqno": -1}
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5908 keys, 14984426 bytes, temperature: kUnknown
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243504425541, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14984426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14943670, "index_size": 24902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 149909, "raw_average_key_size": 25, "raw_value_size": 14835740, "raw_average_value_size": 2511, "num_data_blocks": 1017, "num_entries": 5908, "num_filter_entries": 5908, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759243504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.425881) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14984426 bytes
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.450229) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.6 rd, 79.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.4 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 6428, records dropped: 520 output_compression: NoCompression
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.450290) EVENT_LOG_v1 {"time_micros": 1759243504450269, "job": 30, "event": "compaction_finished", "compaction_time_micros": 188827, "compaction_time_cpu_micros": 33263, "output_level": 6, "num_output_files": 1, "total_output_size": 14984426, "num_input_records": 6428, "num_output_records": 5908, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243504451398, "job": 30, "event": "table_file_deletion", "file_number": 58}
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243504455093, "job": 30, "event": "table_file_deletion", "file_number": 56}
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.236556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.455240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.455248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.455250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.455252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:45:04 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:45:04.455254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:45:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:04] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:45:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:04] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:45:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v887: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:45:04 compute-0 sudo[277322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:45:04 compute-0 sudo[277322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:04 compute-0 sudo[277322]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:05.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:05 compute-0 nova_compute[261524]: 2025-09-30 14:45:05.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:05.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:06 compute-0 ceph-mon[74194]: pgmap v887: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:45:06 compute-0 nova_compute[261524]: 2025-09-30 14:45:06.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v888: 337 pgs: 337 active+clean; 113 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Sep 30 14:45:07 compute-0 ovn_controller[154021]: 2025-09-30T14:45:07Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:b9:ad 10.100.0.14
Sep 30 14:45:07 compute-0 ovn_controller[154021]: 2025-09-30T14:45:07Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:b9:ad 10.100.0.14
Sep 30 14:45:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:07.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:45:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:07.152Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:45:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:07.152Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:45:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:07.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:07 compute-0 ceph-mon[74194]: pgmap v888: 337 pgs: 337 active+clean; 113 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Sep 30 14:45:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:07.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v889: 337 pgs: 337 active+clean; 113 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Sep 30 14:45:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:09.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:09.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:10 compute-0 nova_compute[261524]: 2025-09-30 14:45:10.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:10 compute-0 ceph-mon[74194]: pgmap v889: 337 pgs: 337 active+clean; 113 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Sep 30 14:45:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v890: 337 pgs: 337 active+clean; 113 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Sep 30 14:45:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 14:45:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/188826799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:45:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 14:45:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/188826799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:45:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:11.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:11 compute-0 nova_compute[261524]: 2025-09-30 14:45:11.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:11 compute-0 ceph-mon[74194]: pgmap v890: 337 pgs: 337 active+clean; 113 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Sep 30 14:45:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/188826799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:45:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/188826799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:45:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:11.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:12 compute-0 nova_compute[261524]: 2025-09-30 14:45:12.583 2 INFO nova.compute.manager [None req-c572a97f-da06-44e7-8baa-9e4db66ba4f4 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Get console output
Sep 30 14:45:12 compute-0 nova_compute[261524]: 2025-09-30 14:45:12.589 696 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Sep 30 14:45:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v891: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:45:12 compute-0 nova_compute[261524]: 2025-09-30 14:45:12.972 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:45:12 compute-0 nova_compute[261524]: 2025-09-30 14:45:12.972 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:45:12 compute-0 nova_compute[261524]: 2025-09-30 14:45:12.973 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:45:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:13.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:13.657Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:45:13 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1341505740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:45:13 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3235983054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:45:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:45:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:13.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:45:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:45:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:45:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:14] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Sep 30 14:45:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:14] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Sep 30 14:45:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v892: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:45:14 compute-0 ceph-mon[74194]: pgmap v891: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:45:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2345075913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:45:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:45:14 compute-0 nova_compute[261524]: 2025-09-30 14:45:14.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:45:14 compute-0 nova_compute[261524]: 2025-09-30 14:45:14.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:45:14 compute-0 nova_compute[261524]: 2025-09-30 14:45:14.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:45:15 compute-0 nova_compute[261524]: 2025-09-30 14:45:15.158 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:45:15 compute-0 nova_compute[261524]: 2025-09-30 14:45:15.159 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquired lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:45:15 compute-0 nova_compute[261524]: 2025-09-30 14:45:15.159 2 DEBUG nova.network.neutron [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Sep 30 14:45:15 compute-0 nova_compute[261524]: 2025-09-30 14:45:15.159 2 DEBUG nova.objects.instance [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ab354489-bdb3-49d0-9ed1-574d93130913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:45:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:15.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:15 compute-0 nova_compute[261524]: 2025-09-30 14:45:15.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:15 compute-0 nova_compute[261524]: 2025-09-30 14:45:15.911 2 DEBUG oslo_concurrency.lockutils [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "interface-ab354489-bdb3-49d0-9ed1-574d93130913-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:45:15 compute-0 nova_compute[261524]: 2025-09-30 14:45:15.912 2 DEBUG oslo_concurrency.lockutils [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "interface-ab354489-bdb3-49d0-9ed1-574d93130913-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:45:15 compute-0 nova_compute[261524]: 2025-09-30 14:45:15.912 2 DEBUG nova.objects.instance [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'flavor' on Instance uuid ab354489-bdb3-49d0-9ed1-574d93130913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:45:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:45:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:15.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:45:16 compute-0 ceph-mon[74194]: pgmap v892: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:45:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1612055248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:45:16 compute-0 nova_compute[261524]: 2025-09-30 14:45:16.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v893: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.093 2 DEBUG nova.objects.instance [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'pci_requests' on Instance uuid ab354489-bdb3-49d0-9ed1-574d93130913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.110 2 DEBUG nova.network.neutron [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Sep 30 14:45:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:17.154Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:45:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:17.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:45:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:17.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.282 2 DEBUG nova.network.neutron [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.299 2 DEBUG nova.policy [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59c80c4f189d4667aec64b43afc69ed2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.303 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Releasing lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.303 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.304 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.304 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.304 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.304 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.305 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.305 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.336 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.336 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.336 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.337 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.337 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:45:17 compute-0 ceph-mon[74194]: pgmap v893: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:45:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:45:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219658858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.842 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.928 2 DEBUG nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Sep 30 14:45:17 compute-0 nova_compute[261524]: 2025-09-30 14:45:17.928 2 DEBUG nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Sep 30 14:45:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:17.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.115 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.116 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4422MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.116 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.116 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.196 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Instance ab354489-bdb3-49d0-9ed1-574d93130913 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.196 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.196 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.211 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing inventories for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.228 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating ProviderTree inventory for provider 06783cfc-6d32-454d-9501-ebd8adea3735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.229 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.246 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing aggregate associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.271 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing trait associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.308 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:45:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1219658858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:45:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v894: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 102 KiB/s wr, 14 op/s
Sep 30 14:45:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:45:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610260668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.802 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.810 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.835 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.866 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:45:18 compute-0 nova_compute[261524]: 2025-09-30 14:45:18.867 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:45:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:19.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:19 compute-0 nova_compute[261524]: 2025-09-30 14:45:19.312 2 DEBUG nova.network.neutron [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Successfully created port: 9647a6b7-6ba5-4788-9075-bdfb0924041c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Sep 30 14:45:19 compute-0 ceph-mon[74194]: pgmap v894: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 102 KiB/s wr, 14 op/s
Sep 30 14:45:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1610260668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:45:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:19.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:20 compute-0 nova_compute[261524]: 2025-09-30 14:45:20.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:20 compute-0 nova_compute[261524]: 2025-09-30 14:45:20.381 2 DEBUG nova.network.neutron [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Successfully updated port: 9647a6b7-6ba5-4788-9075-bdfb0924041c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Sep 30 14:45:20 compute-0 nova_compute[261524]: 2025-09-30 14:45:20.405 2 DEBUG oslo_concurrency.lockutils [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:45:20 compute-0 nova_compute[261524]: 2025-09-30 14:45:20.406 2 DEBUG oslo_concurrency.lockutils [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquired lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:45:20 compute-0 nova_compute[261524]: 2025-09-30 14:45:20.406 2 DEBUG nova.network.neutron [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Sep 30 14:45:20 compute-0 nova_compute[261524]: 2025-09-30 14:45:20.485 2 DEBUG nova.compute.manager [req-00f809a4-4e20-459e-9d62-b1728233ef46 req-100d0f5d-9a9f-4ab1-9cc7-dfbc75aab045 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-changed-9647a6b7-6ba5-4788-9075-bdfb0924041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:45:20 compute-0 nova_compute[261524]: 2025-09-30 14:45:20.486 2 DEBUG nova.compute.manager [req-00f809a4-4e20-459e-9d62-b1728233ef46 req-100d0f5d-9a9f-4ab1-9cc7-dfbc75aab045 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Refreshing instance network info cache due to event network-changed-9647a6b7-6ba5-4788-9075-bdfb0924041c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:45:20 compute-0 nova_compute[261524]: 2025-09-30 14:45:20.486 2 DEBUG oslo_concurrency.lockutils [req-00f809a4-4e20-459e-9d62-b1728233ef46 req-100d0f5d-9a9f-4ab1-9cc7-dfbc75aab045 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:45:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v895: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 102 KiB/s wr, 14 op/s
Sep 30 14:45:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:21.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:21 compute-0 nova_compute[261524]: 2025-09-30 14:45:21.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:21 compute-0 ceph-mon[74194]: pgmap v895: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 102 KiB/s wr, 14 op/s
Sep 30 14:45:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:21.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v896: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 104 KiB/s wr, 15 op/s
Sep 30 14:45:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:23.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.486 2 DEBUG nova.network.neutron [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.522 2 DEBUG oslo_concurrency.lockutils [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Releasing lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.523 2 DEBUG oslo_concurrency.lockutils [req-00f809a4-4e20-459e-9d62-b1728233ef46 req-100d0f5d-9a9f-4ab1-9cc7-dfbc75aab045 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.523 2 DEBUG nova.network.neutron [req-00f809a4-4e20-459e-9d62-b1728233ef46 req-100d0f5d-9a9f-4ab1-9cc7-dfbc75aab045 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Refreshing network info cache for port 9647a6b7-6ba5-4788-9075-bdfb0924041c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.526 2 DEBUG nova.virt.libvirt.vif [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-711458846',display_name='tempest-TestNetworkBasicOps-server-711458846',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-711458846',id=6,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMQL53T2ZkSAoVzfinB0Xb6YV6zqFtICzdovU1Kn/PIvW0fTnkL2hml556IQQU+IFdjIRu6Xc3RQKHc2DkPb73zFKtN5c4E62Q7wZZkQI9VBc0aWDqG12KKHVj732hp6w==',key_name='tempest-TestNetworkBasicOps-1073344022',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:44:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-z3tdfpa2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:44:53Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=ab354489-bdb3-49d0-9ed1-574d93130913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.527 2 DEBUG nova.network.os_vif_util [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.528 2 DEBUG nova.network.os_vif_util [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.528 2 DEBUG os_vif [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.529 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9647a6b7-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9647a6b7-6b, col_values=(('external_ids', {'iface-id': '9647a6b7-6ba5-4788-9075-bdfb0924041c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:35:09', 'vm-uuid': 'ab354489-bdb3-49d0-9ed1-574d93130913'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:45:23 compute-0 NetworkManager[45472]: <info>  [1759243523.5365] manager: (tap9647a6b7-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.545 2 INFO os_vif [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b')
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.546 2 DEBUG nova.virt.libvirt.vif [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-711458846',display_name='tempest-TestNetworkBasicOps-server-711458846',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-711458846',id=6,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMQL53T2ZkSAoVzfinB0Xb6YV6zqFtICzdovU1Kn/PIvW0fTnkL2hml556IQQU+IFdjIRu6Xc3RQKHc2DkPb73zFKtN5c4E62Q7wZZkQI9VBc0aWDqG12KKHVj732hp6w==',key_name='tempest-TestNetworkBasicOps-1073344022',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:44:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-z3tdfpa2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:44:53Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=ab354489-bdb3-49d0-9ed1-574d93130913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": 
null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.546 2 DEBUG nova.network.os_vif_util [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.547 2 DEBUG nova.network.os_vif_util [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.549 2 DEBUG nova.virt.libvirt.guest [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] attach device xml: <interface type="ethernet">
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <mac address="fa:16:3e:21:35:09"/>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <model type="virtio"/>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <mtu size="1442"/>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <target dev="tap9647a6b7-6b"/>
Sep 30 14:45:23 compute-0 nova_compute[261524]: </interface>
Sep 30 14:45:23 compute-0 nova_compute[261524]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Sep 30 14:45:23 compute-0 kernel: tap9647a6b7-6b: entered promiscuous mode
Sep 30 14:45:23 compute-0 NetworkManager[45472]: <info>  [1759243523.5634] manager: (tap9647a6b7-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Sep 30 14:45:23 compute-0 ovn_controller[154021]: 2025-09-30T14:45:23Z|00065|binding|INFO|Claiming lport 9647a6b7-6ba5-4788-9075-bdfb0924041c for this chassis.
Sep 30 14:45:23 compute-0 ovn_controller[154021]: 2025-09-30T14:45:23Z|00066|binding|INFO|9647a6b7-6ba5-4788-9075-bdfb0924041c: Claiming fa:16:3e:21:35:09 10.100.0.18
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.574 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:35:09 10.100.0.18'], port_security=['fa:16:3e:21:35:09 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'ab354489-bdb3-49d0-9ed1-574d93130913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f96ad7c-4512-478c-acee-7360218cf3ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '2', 'neutron:security_group_ids': '577c7718-6276-434c-be06-b394756c15c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=093e98dd-9645-4629-9a56-b4dac70fd8d8, chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=9647a6b7-6ba5-4788-9075-bdfb0924041c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.575 163966 INFO neutron.agent.ovn.metadata.agent [-] Port 9647a6b7-6ba5-4788-9075-bdfb0924041c in datapath 4f96ad7c-4512-478c-acee-7360218cf3ea bound to our chassis
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.577 163966 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f96ad7c-4512-478c-acee-7360218cf3ea
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.588 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf35cf9-ac40-4fb9-8bdc-4a95a9902356]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.589 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f96ad7c-41 in ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.590 269027 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f96ad7c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.591 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[ece26bf1-8dd9-4479-be01-01475399d8f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.591 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[bfaa4fbf-f8cc-4754-a218-88f1b0c4a345]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 systemd-udevd[277418]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.602 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[8838740a-d88b-4431-ba9e-2bdd917b04b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 NetworkManager[45472]: <info>  [1759243523.6048] device (tap9647a6b7-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 14:45:23 compute-0 NetworkManager[45472]: <info>  [1759243523.6056] device (tap9647a6b7-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:23 compute-0 ovn_controller[154021]: 2025-09-30T14:45:23Z|00067|binding|INFO|Setting lport 9647a6b7-6ba5-4788-9075-bdfb0924041c ovn-installed in OVS
Sep 30 14:45:23 compute-0 ovn_controller[154021]: 2025-09-30T14:45:23Z|00068|binding|INFO|Setting lport 9647a6b7-6ba5-4788-9075-bdfb0924041c up in Southbound
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.626 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[66f5f6a9-84ca-49a2-8e20-62b092d94761]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:23.658Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:45:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:23.659Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:45:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:23.659Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.661 2 DEBUG nova.virt.libvirt.driver [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.661 2 DEBUG nova.virt.libvirt.driver [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.662 2 DEBUG nova.virt.libvirt.driver [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No VIF found with MAC fa:16:3e:db:b9:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.662 2 DEBUG nova.virt.libvirt.driver [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No VIF found with MAC fa:16:3e:21:35:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.665 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[f737a5cc-bf88-4b91-a01c-4c02c3c65a93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 NetworkManager[45472]: <info>  [1759243523.6722] manager: (tap4f96ad7c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.672 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[0cbf9a4b-b31b-4f41-9c74-f458de6cfd63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 systemd-udevd[277421]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.693 2 DEBUG nova.virt.libvirt.guest [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-711458846</nova:name>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:45:23</nova:creationTime>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:45:23 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:45:23 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:45:23 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:45:23 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:45:23 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:45:23 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:45:23 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:45:23 compute-0 nova_compute[261524]:     <nova:port uuid="70e1bfe9-6006-4e08-9c7f-c0d64c8269a0">
Sep 30 14:45:23 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 14:45:23 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:45:23 compute-0 nova_compute[261524]:     <nova:port uuid="9647a6b7-6ba5-4788-9075-bdfb0924041c">
Sep 30 14:45:23 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Sep 30 14:45:23 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:45:23 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:45:23 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:45:23 compute-0 nova_compute[261524]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.710 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[392e0985-4677-4a66-b724-53d35b6f6317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.713 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[935fa44e-a4a6-4a95-83ec-b02cab12ad3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.724 2 DEBUG oslo_concurrency.lockutils [None req-951fa84e-e5a2-4101-8032-fb30150ab0f0 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "interface-ab354489-bdb3-49d0-9ed1-574d93130913-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:45:23 compute-0 NetworkManager[45472]: <info>  [1759243523.7394] device (tap4f96ad7c-40): carrier: link connected
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.747 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[340f9997-3be7-4feb-859f-4bd65e232c78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.769 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[7e44ee61-8bc9-4dd9-8f97-c6d8285d3dae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f96ad7c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:68:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688609, 'reachable_time': 43753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277446, 'error': None, 'target': 'ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.790 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[686d5fe6-6aff-40d2-9a96-aec49320e94f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:6867'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688609, 'tstamp': 688609}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277447, 'error': None, 'target': 'ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 ceph-mon[74194]: pgmap v896: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 104 KiB/s wr, 15 op/s
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.810 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[fbec7390-b3b9-449f-8c9d-259c3d294d10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f96ad7c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:68:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688609, 'reachable_time': 43753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277448, 'error': None, 'target': 'ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.854 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[076684ab-c959-41e5-aa00-28949f42cace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.855 2 DEBUG nova.compute.manager [req-2d0ff95e-c864-4b49-9df6-e5c651437a85 req-f7baab87-b9aa-49e2-9781-8aff87c68323 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-plugged-9647a6b7-6ba5-4788-9075-bdfb0924041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.856 2 DEBUG oslo_concurrency.lockutils [req-2d0ff95e-c864-4b49-9df6-e5c651437a85 req-f7baab87-b9aa-49e2-9781-8aff87c68323 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.856 2 DEBUG oslo_concurrency.lockutils [req-2d0ff95e-c864-4b49-9df6-e5c651437a85 req-f7baab87-b9aa-49e2-9781-8aff87c68323 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.857 2 DEBUG oslo_concurrency.lockutils [req-2d0ff95e-c864-4b49-9df6-e5c651437a85 req-f7baab87-b9aa-49e2-9781-8aff87c68323 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.857 2 DEBUG nova.compute.manager [req-2d0ff95e-c864-4b49-9df6-e5c651437a85 req-f7baab87-b9aa-49e2-9781-8aff87c68323 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] No waiting events found dispatching network-vif-plugged-9647a6b7-6ba5-4788-9075-bdfb0924041c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.858 2 WARNING nova.compute.manager [req-2d0ff95e-c864-4b49-9df6-e5c651437a85 req-f7baab87-b9aa-49e2-9781-8aff87c68323 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received unexpected event network-vif-plugged-9647a6b7-6ba5-4788-9075-bdfb0924041c for instance with vm_state active and task_state None.
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.919 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[1308445c-6437-446f-bf44-08d537c60619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.920 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f96ad7c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.921 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.921 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f96ad7c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:45:23 compute-0 kernel: tap4f96ad7c-40: entered promiscuous mode
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:23 compute-0 NetworkManager[45472]: <info>  [1759243523.9245] manager: (tap4f96ad7c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.927 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f96ad7c-40, col_values=(('external_ids', {'iface-id': '845615dc-efdb-4490-bc40-9a8a23d405c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:45:23 compute-0 ovn_controller[154021]: 2025-09-30T14:45:23Z|00069|binding|INFO|Releasing lport 845615dc-efdb-4490-bc40-9a8a23d405c1 from this chassis (sb_readonly=0)
Sep 30 14:45:23 compute-0 nova_compute[261524]: 2025-09-30 14:45:23.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.959 163966 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f96ad7c-4512-478c-acee-7360218cf3ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f96ad7c-4512-478c-acee-7360218cf3ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.960 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb44fe3-0947-45cf-834a-d8ae7cf18a22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.961 163966 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: global
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     log         /dev/log local0 debug
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     log-tag     haproxy-metadata-proxy-4f96ad7c-4512-478c-acee-7360218cf3ea
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     user        root
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     group       root
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     maxconn     1024
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     pidfile     /var/lib/neutron/external/pids/4f96ad7c-4512-478c-acee-7360218cf3ea.pid.haproxy
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     daemon
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: defaults
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     log global
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     mode http
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     option httplog
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     option dontlognull
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     option http-server-close
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     option forwardfor
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     retries                 3
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     timeout http-request    30s
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     timeout connect         30s
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     timeout client          32s
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     timeout server          32s
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     timeout http-keep-alive 30s
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: listen listener
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     bind 169.254.169.254:80
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:     http-request add-header X-OVN-Network-ID 4f96ad7c-4512-478c-acee-7360218cf3ea
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Sep 30 14:45:23 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:23.962 163966 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea', 'env', 'PROCESS_TAG=haproxy-4f96ad7c-4512-478c-acee-7360218cf3ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f96ad7c-4512-478c-acee-7360218cf3ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Sep 30 14:45:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:23.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:24 compute-0 podman[277481]: 2025-09-30 14:45:24.419211497 +0000 UTC m=+0.064995073 container create bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:45:24 compute-0 systemd[1]: Started libpod-conmon-bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b.scope.
Sep 30 14:45:24 compute-0 podman[277481]: 2025-09-30 14:45:24.380060628 +0000 UTC m=+0.025844304 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Sep 30 14:45:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/482fc40af09dd4bda9578911cfb5534a6ea27758df09ab94a5eec6c961ae2757/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:24 compute-0 podman[277481]: 2025-09-30 14:45:24.509893648 +0000 UTC m=+0.155677324 container init bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Sep 30 14:45:24 compute-0 podman[277481]: 2025-09-30 14:45:24.516842019 +0000 UTC m=+0.162625635 container start bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Sep 30 14:45:24 compute-0 neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea[277496]: [NOTICE]   (277500) : New worker (277502) forked
Sep 30 14:45:24 compute-0 neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea[277496]: [NOTICE]   (277500) : Loading success.
Sep 30 14:45:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:24] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Sep 30 14:45:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:24] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Sep 30 14:45:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v897: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 14 KiB/s wr, 1 op/s
Sep 30 14:45:24 compute-0 sudo[277511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:45:25 compute-0 sudo[277511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:25 compute-0 sudo[277511]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:25.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:25 compute-0 nova_compute[261524]: 2025-09-30 14:45:25.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:25 compute-0 ovn_controller[154021]: 2025-09-30T14:45:25Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:21:35:09 10.100.0.18
Sep 30 14:45:25 compute-0 ovn_controller[154021]: 2025-09-30T14:45:25Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:21:35:09 10.100.0.18
Sep 30 14:45:25 compute-0 nova_compute[261524]: 2025-09-30 14:45:25.649 2 DEBUG nova.network.neutron [req-00f809a4-4e20-459e-9d62-b1728233ef46 req-100d0f5d-9a9f-4ab1-9cc7-dfbc75aab045 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updated VIF entry in instance network info cache for port 9647a6b7-6ba5-4788-9075-bdfb0924041c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:45:25 compute-0 nova_compute[261524]: 2025-09-30 14:45:25.650 2 DEBUG nova.network.neutron [req-00f809a4-4e20-459e-9d62-b1728233ef46 req-100d0f5d-9a9f-4ab1-9cc7-dfbc75aab045 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:45:25 compute-0 nova_compute[261524]: 2025-09-30 14:45:25.673 2 DEBUG oslo_concurrency.lockutils [req-00f809a4-4e20-459e-9d62-b1728233ef46 req-100d0f5d-9a9f-4ab1-9cc7-dfbc75aab045 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:45:25 compute-0 ceph-mon[74194]: pgmap v897: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 14 KiB/s wr, 1 op/s
Sep 30 14:45:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:25.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:25 compute-0 nova_compute[261524]: 2025-09-30 14:45:25.988 2 DEBUG nova.compute.manager [req-5218884f-92f4-4986-a09d-aa00fbef50c9 req-ad27d5c5-b0ce-44c1-9452-59060e31b8bb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-plugged-9647a6b7-6ba5-4788-9075-bdfb0924041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:45:25 compute-0 nova_compute[261524]: 2025-09-30 14:45:25.989 2 DEBUG oslo_concurrency.lockutils [req-5218884f-92f4-4986-a09d-aa00fbef50c9 req-ad27d5c5-b0ce-44c1-9452-59060e31b8bb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:45:25 compute-0 nova_compute[261524]: 2025-09-30 14:45:25.989 2 DEBUG oslo_concurrency.lockutils [req-5218884f-92f4-4986-a09d-aa00fbef50c9 req-ad27d5c5-b0ce-44c1-9452-59060e31b8bb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:45:25 compute-0 nova_compute[261524]: 2025-09-30 14:45:25.989 2 DEBUG oslo_concurrency.lockutils [req-5218884f-92f4-4986-a09d-aa00fbef50c9 req-ad27d5c5-b0ce-44c1-9452-59060e31b8bb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:45:25 compute-0 nova_compute[261524]: 2025-09-30 14:45:25.990 2 DEBUG nova.compute.manager [req-5218884f-92f4-4986-a09d-aa00fbef50c9 req-ad27d5c5-b0ce-44c1-9452-59060e31b8bb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] No waiting events found dispatching network-vif-plugged-9647a6b7-6ba5-4788-9075-bdfb0924041c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:45:25 compute-0 nova_compute[261524]: 2025-09-30 14:45:25.990 2 WARNING nova.compute.manager [req-5218884f-92f4-4986-a09d-aa00fbef50c9 req-ad27d5c5-b0ce-44c1-9452-59060e31b8bb e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received unexpected event network-vif-plugged-9647a6b7-6ba5-4788-9075-bdfb0924041c for instance with vm_state active and task_state None.
Sep 30 14:45:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v898: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 14 KiB/s wr, 1 op/s
Sep 30 14:45:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:27.156Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:45:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:27.157Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:45:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:27.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:45:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:27.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:27 compute-0 ceph-mon[74194]: pgmap v898: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 14 KiB/s wr, 1 op/s
Sep 30 14:45:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:27.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:28 compute-0 nova_compute[261524]: 2025-09-30 14:45:28.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v899: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Sep 30 14:45:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:29.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:45:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:45:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:45:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:45:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:45:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:45:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:45:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:45:29 compute-0 ceph-mon[74194]: pgmap v899: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Sep 30 14:45:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:45:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:30 compute-0 podman[277544]: 2025-09-30 14:45:30.163711662 +0000 UTC m=+0.076101253 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Sep 30 14:45:30 compute-0 podman[277550]: 2025-09-30 14:45:30.163987269 +0000 UTC m=+0.059864570 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 14:45:30 compute-0 podman[277542]: 2025-09-30 14:45:30.183126037 +0000 UTC m=+0.104975444 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Sep 30 14:45:30 compute-0 podman[277543]: 2025-09-30 14:45:30.200396227 +0000 UTC m=+0.118023624 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Sep 30 14:45:30 compute-0 nova_compute[261524]: 2025-09-30 14:45:30.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v900: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Sep 30 14:45:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:31.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:31 compute-0 ceph-mon[74194]: pgmap v900: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Sep 30 14:45:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:31.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v901: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Sep 30 14:45:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1482807520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:45:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:45:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:33.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:45:33 compute-0 nova_compute[261524]: 2025-09-30 14:45:33.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:33.660Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:45:33 compute-0 ceph-mon[74194]: pgmap v901: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Sep 30 14:45:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:33.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:34] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Sep 30 14:45:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:34] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Sep 30 14:45:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v902: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Sep 30 14:45:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:45:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:35.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:45:35 compute-0 nova_compute[261524]: 2025-09-30 14:45:35.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:35 compute-0 ceph-mon[74194]: pgmap v902: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 5.3 KiB/s wr, 1 op/s
Sep 30 14:45:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:35.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v903: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Sep 30 14:45:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:37.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:45:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:37.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:37 compute-0 ceph-mon[74194]: pgmap v903: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Sep 30 14:45:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1042734695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:45:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4043231495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:45:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:37.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:38.262 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:45:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:38.262 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:45:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:45:38.263 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:45:38 compute-0 nova_compute[261524]: 2025-09-30 14:45:38.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v904: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 14:45:38 compute-0 sudo[277637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:45:38 compute-0 sudo[277637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:38 compute-0 sudo[277637]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:39 compute-0 sudo[277662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:45:39 compute-0 sudo[277662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:39.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:39 compute-0 sudo[277662]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:39 compute-0 ceph-mon[74194]: pgmap v904: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 14:45:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:40 compute-0 nova_compute[261524]: 2025-09-30 14:45:40.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v905: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 14:45:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:45:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:41.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:45:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:45:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:45:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:45:41 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:45:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:45:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:45:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:45:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:45:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:45:41 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:45:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:45:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:45:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:45:41 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:45:41 compute-0 sudo[277722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:45:41 compute-0 sudo[277722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:41 compute-0 sudo[277722]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:41 compute-0 sudo[277747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:45:41 compute-0 sudo[277747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:41 compute-0 ceph-mon[74194]: pgmap v905: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Sep 30 14:45:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:45:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:45:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:45:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:45:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:45:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:41.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:42 compute-0 podman[277815]: 2025-09-30 14:45:42.228447286 +0000 UTC m=+0.056283426 container create 03fda780e6f0211a0dcc6143d7dffabd5af3817ca7206c5c61dc7529b589af7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_galois, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:45:42 compute-0 systemd[1]: Started libpod-conmon-03fda780e6f0211a0dcc6143d7dffabd5af3817ca7206c5c61dc7529b589af7a.scope.
Sep 30 14:45:42 compute-0 podman[277815]: 2025-09-30 14:45:42.206544286 +0000 UTC m=+0.034380456 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:45:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:45:42 compute-0 podman[277815]: 2025-09-30 14:45:42.327148206 +0000 UTC m=+0.154984376 container init 03fda780e6f0211a0dcc6143d7dffabd5af3817ca7206c5c61dc7529b589af7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:45:42 compute-0 podman[277815]: 2025-09-30 14:45:42.340409441 +0000 UTC m=+0.168245591 container start 03fda780e6f0211a0dcc6143d7dffabd5af3817ca7206c5c61dc7529b589af7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:45:42 compute-0 podman[277815]: 2025-09-30 14:45:42.343875521 +0000 UTC m=+0.171711691 container attach 03fda780e6f0211a0dcc6143d7dffabd5af3817ca7206c5c61dc7529b589af7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_galois, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:45:42 compute-0 determined_galois[277831]: 167 167
Sep 30 14:45:42 compute-0 systemd[1]: libpod-03fda780e6f0211a0dcc6143d7dffabd5af3817ca7206c5c61dc7529b589af7a.scope: Deactivated successfully.
Sep 30 14:45:42 compute-0 podman[277815]: 2025-09-30 14:45:42.348712657 +0000 UTC m=+0.176548817 container died 03fda780e6f0211a0dcc6143d7dffabd5af3817ca7206c5c61dc7529b589af7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5723e6ace62fd7507b0d755e48ab61c1bbaf35bbdbed4c6d217b3be2568ef5fe-merged.mount: Deactivated successfully.
Sep 30 14:45:42 compute-0 podman[277815]: 2025-09-30 14:45:42.387411025 +0000 UTC m=+0.215247165 container remove 03fda780e6f0211a0dcc6143d7dffabd5af3817ca7206c5c61dc7529b589af7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_galois, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:45:42 compute-0 systemd[1]: libpod-conmon-03fda780e6f0211a0dcc6143d7dffabd5af3817ca7206c5c61dc7529b589af7a.scope: Deactivated successfully.
Sep 30 14:45:42 compute-0 podman[277854]: 2025-09-30 14:45:42.617544837 +0000 UTC m=+0.076732169 container create 5fb1d4ea61e2e4d51cc5d58321f7c3df4dcc1d68c6ddf0d0e214d1d6efb027ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:45:42 compute-0 systemd[1]: Started libpod-conmon-5fb1d4ea61e2e4d51cc5d58321f7c3df4dcc1d68c6ddf0d0e214d1d6efb027ec.scope.
Sep 30 14:45:42 compute-0 podman[277854]: 2025-09-30 14:45:42.58733562 +0000 UTC m=+0.046523002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:45:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13468e1f42c11360d74b79e17cfd9b3f57c877a189fbec15c391ffb4f6fcc12c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13468e1f42c11360d74b79e17cfd9b3f57c877a189fbec15c391ffb4f6fcc12c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13468e1f42c11360d74b79e17cfd9b3f57c877a189fbec15c391ffb4f6fcc12c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13468e1f42c11360d74b79e17cfd9b3f57c877a189fbec15c391ffb4f6fcc12c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13468e1f42c11360d74b79e17cfd9b3f57c877a189fbec15c391ffb4f6fcc12c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:42 compute-0 podman[277854]: 2025-09-30 14:45:42.711165274 +0000 UTC m=+0.170352616 container init 5fb1d4ea61e2e4d51cc5d58321f7c3df4dcc1d68c6ddf0d0e214d1d6efb027ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 14:45:42 compute-0 podman[277854]: 2025-09-30 14:45:42.723222228 +0000 UTC m=+0.182409550 container start 5fb1d4ea61e2e4d51cc5d58321f7c3df4dcc1d68c6ddf0d0e214d1d6efb027ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:45:42 compute-0 podman[277854]: 2025-09-30 14:45:42.727748806 +0000 UTC m=+0.186936178 container attach 5fb1d4ea61e2e4d51cc5d58321f7c3df4dcc1d68c6ddf0d0e214d1d6efb027ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:45:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v906: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Sep 30 14:45:43 compute-0 practical_stonebraker[277870]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:45:43 compute-0 practical_stonebraker[277870]: --> All data devices are unavailable
Sep 30 14:45:43 compute-0 systemd[1]: libpod-5fb1d4ea61e2e4d51cc5d58321f7c3df4dcc1d68c6ddf0d0e214d1d6efb027ec.scope: Deactivated successfully.
Sep 30 14:45:43 compute-0 podman[277854]: 2025-09-30 14:45:43.126388255 +0000 UTC m=+0.585575547 container died 5fb1d4ea61e2e4d51cc5d58321f7c3df4dcc1d68c6ddf0d0e214d1d6efb027ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-13468e1f42c11360d74b79e17cfd9b3f57c877a189fbec15c391ffb4f6fcc12c-merged.mount: Deactivated successfully.
Sep 30 14:45:43 compute-0 podman[277854]: 2025-09-30 14:45:43.167367892 +0000 UTC m=+0.626555184 container remove 5fb1d4ea61e2e4d51cc5d58321f7c3df4dcc1d68c6ddf0d0e214d1d6efb027ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 14:45:43 compute-0 systemd[1]: libpod-conmon-5fb1d4ea61e2e4d51cc5d58321f7c3df4dcc1d68c6ddf0d0e214d1d6efb027ec.scope: Deactivated successfully.
Sep 30 14:45:43 compute-0 sudo[277747]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:43 compute-0 sudo[277897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:45:43 compute-0 sudo[277897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:43 compute-0 sudo[277897]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:43.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:43 compute-0 sudo[277922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:45:43 compute-0 sudo[277922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:43 compute-0 nova_compute[261524]: 2025-09-30 14:45:43.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:43.661Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:45:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:43.662Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:45:43 compute-0 podman[277988]: 2025-09-30 14:45:43.790414394 +0000 UTC m=+0.042929119 container create e4e44713a276044da8dbff7fa78558e48e86a46df1c8e3dd2fa8bf4b4892fe84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:45:43 compute-0 systemd[1]: Started libpod-conmon-e4e44713a276044da8dbff7fa78558e48e86a46df1c8e3dd2fa8bf4b4892fe84.scope.
Sep 30 14:45:43 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:45:43 compute-0 podman[277988]: 2025-09-30 14:45:43.774102579 +0000 UTC m=+0.026617334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:45:43 compute-0 podman[277988]: 2025-09-30 14:45:43.873122797 +0000 UTC m=+0.125637602 container init e4e44713a276044da8dbff7fa78558e48e86a46df1c8e3dd2fa8bf4b4892fe84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:45:43 compute-0 podman[277988]: 2025-09-30 14:45:43.88669059 +0000 UTC m=+0.139205345 container start e4e44713a276044da8dbff7fa78558e48e86a46df1c8e3dd2fa8bf4b4892fe84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hawking, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Sep 30 14:45:43 compute-0 elegant_hawking[278006]: 167 167
Sep 30 14:45:43 compute-0 systemd[1]: libpod-e4e44713a276044da8dbff7fa78558e48e86a46df1c8e3dd2fa8bf4b4892fe84.scope: Deactivated successfully.
Sep 30 14:45:43 compute-0 podman[277988]: 2025-09-30 14:45:43.893554139 +0000 UTC m=+0.146068944 container attach e4e44713a276044da8dbff7fa78558e48e86a46df1c8e3dd2fa8bf4b4892fe84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hawking, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:45:43 compute-0 podman[277988]: 2025-09-30 14:45:43.894089943 +0000 UTC m=+0.146604668 container died e4e44713a276044da8dbff7fa78558e48e86a46df1c8e3dd2fa8bf4b4892fe84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-975cf8814ffdeb907f93d3969639d5cdecafc899690e2c8f54bcf655195fc11f-merged.mount: Deactivated successfully.
Sep 30 14:45:43 compute-0 podman[277988]: 2025-09-30 14:45:43.934277169 +0000 UTC m=+0.186791894 container remove e4e44713a276044da8dbff7fa78558e48e86a46df1c8e3dd2fa8bf4b4892fe84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hawking, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:45:43 compute-0 systemd[1]: libpod-conmon-e4e44713a276044da8dbff7fa78558e48e86a46df1c8e3dd2fa8bf4b4892fe84.scope: Deactivated successfully.
Sep 30 14:45:43 compute-0 ceph-mon[74194]: pgmap v906: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Sep 30 14:45:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:45:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:43.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:45:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:44 compute-0 podman[278029]: 2025-09-30 14:45:44.172252795 +0000 UTC m=+0.068410342 container create 0603e2cf5cc05c19301de9359cc6acf49059b07b363da888cd6fcf1205435ec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:45:44 compute-0 systemd[1]: Started libpod-conmon-0603e2cf5cc05c19301de9359cc6acf49059b07b363da888cd6fcf1205435ec4.scope.
Sep 30 14:45:44 compute-0 podman[278029]: 2025-09-30 14:45:44.14783756 +0000 UTC m=+0.043995177 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:45:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e46ac011dc832b03ab8aed379909807e3f43728058879dcfdc858056e1d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e46ac011dc832b03ab8aed379909807e3f43728058879dcfdc858056e1d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e46ac011dc832b03ab8aed379909807e3f43728058879dcfdc858056e1d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc9e46ac011dc832b03ab8aed379909807e3f43728058879dcfdc858056e1d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:44 compute-0 podman[278029]: 2025-09-30 14:45:44.279778445 +0000 UTC m=+0.175935982 container init 0603e2cf5cc05c19301de9359cc6acf49059b07b363da888cd6fcf1205435ec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:45:44 compute-0 podman[278029]: 2025-09-30 14:45:44.291145351 +0000 UTC m=+0.187302898 container start 0603e2cf5cc05c19301de9359cc6acf49059b07b363da888cd6fcf1205435ec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:45:44 compute-0 podman[278029]: 2025-09-30 14:45:44.295049402 +0000 UTC m=+0.191206909 container attach 0603e2cf5cc05c19301de9359cc6acf49059b07b363da888cd6fcf1205435ec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]: {
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:     "0": [
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:         {
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "devices": [
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "/dev/loop3"
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             ],
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "lv_name": "ceph_lv0",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "lv_size": "21470642176",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "name": "ceph_lv0",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "tags": {
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.cluster_name": "ceph",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.crush_device_class": "",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.encrypted": "0",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.osd_id": "0",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.type": "block",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.vdo": "0",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:                 "ceph.with_tpm": "0"
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             },
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "type": "block",
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:             "vg_name": "ceph_vg0"
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:         }
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]:     ]
Sep 30 14:45:44 compute-0 stupefied_jennings[278045]: }
Sep 30 14:45:44 compute-0 systemd[1]: libpod-0603e2cf5cc05c19301de9359cc6acf49059b07b363da888cd6fcf1205435ec4.scope: Deactivated successfully.
Sep 30 14:45:44 compute-0 podman[278029]: 2025-09-30 14:45:44.626014309 +0000 UTC m=+0.522171866 container died 0603e2cf5cc05c19301de9359cc6acf49059b07b363da888cd6fcf1205435ec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 14:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cc9e46ac011dc832b03ab8aed379909807e3f43728058879dcfdc858056e1d7-merged.mount: Deactivated successfully.
Sep 30 14:45:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:45:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:45:44 compute-0 podman[278029]: 2025-09-30 14:45:44.674626095 +0000 UTC m=+0.570783632 container remove 0603e2cf5cc05c19301de9359cc6acf49059b07b363da888cd6fcf1205435ec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:45:44 compute-0 systemd[1]: libpod-conmon-0603e2cf5cc05c19301de9359cc6acf49059b07b363da888cd6fcf1205435ec4.scope: Deactivated successfully.
Sep 30 14:45:44 compute-0 sudo[277922]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:44] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Sep 30 14:45:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:44] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Sep 30 14:45:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v907: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Sep 30 14:45:44 compute-0 sudo[278066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:45:44 compute-0 sudo[278066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:44 compute-0 sudo[278066]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:44 compute-0 sudo[278091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:45:44 compute-0 sudo[278091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:45:45 compute-0 sudo[278116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:45:45 compute-0 sudo[278116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:45 compute-0 sudo[278116]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:45:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:45.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:45:45 compute-0 podman[278183]: 2025-09-30 14:45:45.346951109 +0000 UTC m=+0.067441247 container create 4ea4e65a287b64c23c68d06c6a0e5e84dc5c89c2fc6ca7f5a12501b040dd9b5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:45:45 compute-0 nova_compute[261524]: 2025-09-30 14:45:45.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:45 compute-0 systemd[1]: Started libpod-conmon-4ea4e65a287b64c23c68d06c6a0e5e84dc5c89c2fc6ca7f5a12501b040dd9b5a.scope.
Sep 30 14:45:45 compute-0 podman[278183]: 2025-09-30 14:45:45.31743523 +0000 UTC m=+0.037925378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:45:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:45:45 compute-0 podman[278183]: 2025-09-30 14:45:45.448696998 +0000 UTC m=+0.169187156 container init 4ea4e65a287b64c23c68d06c6a0e5e84dc5c89c2fc6ca7f5a12501b040dd9b5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_lichterman, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:45:45 compute-0 podman[278183]: 2025-09-30 14:45:45.460438954 +0000 UTC m=+0.180929072 container start 4ea4e65a287b64c23c68d06c6a0e5e84dc5c89c2fc6ca7f5a12501b040dd9b5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_lichterman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:45:45 compute-0 podman[278183]: 2025-09-30 14:45:45.463837362 +0000 UTC m=+0.184327500 container attach 4ea4e65a287b64c23c68d06c6a0e5e84dc5c89c2fc6ca7f5a12501b040dd9b5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:45:45 compute-0 eloquent_lichterman[278200]: 167 167
Sep 30 14:45:45 compute-0 systemd[1]: libpod-4ea4e65a287b64c23c68d06c6a0e5e84dc5c89c2fc6ca7f5a12501b040dd9b5a.scope: Deactivated successfully.
Sep 30 14:45:45 compute-0 podman[278183]: 2025-09-30 14:45:45.46680919 +0000 UTC m=+0.187299318 container died 4ea4e65a287b64c23c68d06c6a0e5e84dc5c89c2fc6ca7f5a12501b040dd9b5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:45:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1477edeefb1e5402ec298789b1c104397c6c522b2a312986b2a393e855daaf4-merged.mount: Deactivated successfully.
Sep 30 14:45:45 compute-0 podman[278183]: 2025-09-30 14:45:45.515889937 +0000 UTC m=+0.236380055 container remove 4ea4e65a287b64c23c68d06c6a0e5e84dc5c89c2fc6ca7f5a12501b040dd9b5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_lichterman, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:45:45 compute-0 systemd[1]: libpod-conmon-4ea4e65a287b64c23c68d06c6a0e5e84dc5c89c2fc6ca7f5a12501b040dd9b5a.scope: Deactivated successfully.
Sep 30 14:45:45 compute-0 podman[278226]: 2025-09-30 14:45:45.730709931 +0000 UTC m=+0.051648236 container create 63fe20a9a4ab737ab7fbf232c4d15f4d5eea28686f03ce2e7ea638efb1e68cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_wilbur, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:45:45 compute-0 systemd[1]: Started libpod-conmon-63fe20a9a4ab737ab7fbf232c4d15f4d5eea28686f03ce2e7ea638efb1e68cbe.scope.
Sep 30 14:45:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb36aaf63b8494fab8407c98f7c65cc64c7bb68b72b7dbf1346369b86e88040/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb36aaf63b8494fab8407c98f7c65cc64c7bb68b72b7dbf1346369b86e88040/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb36aaf63b8494fab8407c98f7c65cc64c7bb68b72b7dbf1346369b86e88040/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb36aaf63b8494fab8407c98f7c65cc64c7bb68b72b7dbf1346369b86e88040/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:45:45 compute-0 podman[278226]: 2025-09-30 14:45:45.715617288 +0000 UTC m=+0.036555623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:45:45 compute-0 podman[278226]: 2025-09-30 14:45:45.816925575 +0000 UTC m=+0.137863920 container init 63fe20a9a4ab737ab7fbf232c4d15f4d5eea28686f03ce2e7ea638efb1e68cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_wilbur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:45:45 compute-0 podman[278226]: 2025-09-30 14:45:45.833383174 +0000 UTC m=+0.154321519 container start 63fe20a9a4ab737ab7fbf232c4d15f4d5eea28686f03ce2e7ea638efb1e68cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:45:45 compute-0 podman[278226]: 2025-09-30 14:45:45.837279965 +0000 UTC m=+0.158218310 container attach 63fe20a9a4ab737ab7fbf232c4d15f4d5eea28686f03ce2e7ea638efb1e68cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:45:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:46.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:46 compute-0 ceph-mon[74194]: pgmap v907: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Sep 30 14:45:46 compute-0 lvm[278317]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:45:46 compute-0 lvm[278317]: VG ceph_vg0 finished
Sep 30 14:45:46 compute-0 determined_wilbur[278242]: {}
Sep 30 14:45:46 compute-0 systemd[1]: libpod-63fe20a9a4ab737ab7fbf232c4d15f4d5eea28686f03ce2e7ea638efb1e68cbe.scope: Deactivated successfully.
Sep 30 14:45:46 compute-0 podman[278226]: 2025-09-30 14:45:46.534607391 +0000 UTC m=+0.855545706 container died 63fe20a9a4ab737ab7fbf232c4d15f4d5eea28686f03ce2e7ea638efb1e68cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_wilbur, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 14:45:46 compute-0 systemd[1]: libpod-63fe20a9a4ab737ab7fbf232c4d15f4d5eea28686f03ce2e7ea638efb1e68cbe.scope: Consumed 1.109s CPU time.
Sep 30 14:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bb36aaf63b8494fab8407c98f7c65cc64c7bb68b72b7dbf1346369b86e88040-merged.mount: Deactivated successfully.
Sep 30 14:45:46 compute-0 podman[278226]: 2025-09-30 14:45:46.580124866 +0000 UTC m=+0.901063191 container remove 63fe20a9a4ab737ab7fbf232c4d15f4d5eea28686f03ce2e7ea638efb1e68cbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_wilbur, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:45:46 compute-0 systemd[1]: libpod-conmon-63fe20a9a4ab737ab7fbf232c4d15f4d5eea28686f03ce2e7ea638efb1e68cbe.scope: Deactivated successfully.
Sep 30 14:45:46 compute-0 sudo[278091]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:45:46 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:45:46 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:46 compute-0 sudo[278332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:45:46 compute-0 sudo[278332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:45:46 compute-0 sudo[278332]: pam_unix(sudo:session): session closed for user root
Sep 30 14:45:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v908: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 162 op/s
Sep 30 14:45:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:47.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:45:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:47.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:47 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:47 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:45:47 compute-0 ceph-mon[74194]: pgmap v908: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 162 op/s
Sep 30 14:45:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:48.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:48 compute-0 nova_compute[261524]: 2025-09-30 14:45:48.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v909: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Sep 30 14:45:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:49.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:49 compute-0 ceph-mon[74194]: pgmap v909: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Sep 30 14:45:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:50.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:50 compute-0 nova_compute[261524]: 2025-09-30 14:45:50.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v910: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Sep 30 14:45:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:51.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:51 compute-0 ceph-mon[74194]: pgmap v910: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Sep 30 14:45:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:52.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v911: 337 pgs: 337 active+clean; 192 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Sep 30 14:45:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:53.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:53 compute-0 nova_compute[261524]: 2025-09-30 14:45:53.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:53.663Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:45:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:53.663Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:45:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:53.663Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:45:53 compute-0 ceph-mon[74194]: pgmap v911: 337 pgs: 337 active+clean; 192 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Sep 30 14:45:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:54.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:54] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Sep 30 14:45:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:45:54] "GET /metrics HTTP/1.1" 200 48545 "" "Prometheus/2.51.0"
Sep 30 14:45:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v912: 337 pgs: 337 active+clean; 192 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 199 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Sep 30 14:45:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:45:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:55.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:45:55 compute-0 nova_compute[261524]: 2025-09-30 14:45:55.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:55 compute-0 ceph-mon[74194]: pgmap v912: 337 pgs: 337 active+clean; 192 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 199 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Sep 30 14:45:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v913: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 14:45:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:45:57.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:45:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:45:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:57.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:57 compute-0 ceph-mon[74194]: pgmap v913: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 14:45:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:45:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:45:58.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:45:58 compute-0 nova_compute[261524]: 2025-09-30 14:45:58.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:45:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v914: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Sep 30 14:45:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:45:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:45:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:45:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:45:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:45:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:45:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:45:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:45:59.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:45:59
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'volumes', 'backups', 'vms', '.rgw.root', 'images', '.nfs', 'default.rgw.log']
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:45:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:45:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:45:59 compute-0 ceph-mon[74194]: pgmap v914: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Sep 30 14:45:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015178466049684182 of space, bias 1.0, pg target 0.4553539814905255 quantized to 32 (current 32)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:45:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:46:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:00.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:00 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:00.123 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:46:00 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:00.124 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:46:00 compute-0 nova_compute[261524]: 2025-09-30 14:46:00.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:00 compute-0 nova_compute[261524]: 2025-09-30 14:46:00.436 2 DEBUG nova.compute.manager [req-cae0a431-e1b5-4e97-bc7b-3e680f3640bb req-65bd3f7e-7e9e-4b30-9d4e-212719dbab42 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-changed-9647a6b7-6ba5-4788-9075-bdfb0924041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:46:00 compute-0 nova_compute[261524]: 2025-09-30 14:46:00.436 2 DEBUG nova.compute.manager [req-cae0a431-e1b5-4e97-bc7b-3e680f3640bb req-65bd3f7e-7e9e-4b30-9d4e-212719dbab42 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Refreshing instance network info cache due to event network-changed-9647a6b7-6ba5-4788-9075-bdfb0924041c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:46:00 compute-0 nova_compute[261524]: 2025-09-30 14:46:00.436 2 DEBUG oslo_concurrency.lockutils [req-cae0a431-e1b5-4e97-bc7b-3e680f3640bb req-65bd3f7e-7e9e-4b30-9d4e-212719dbab42 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:46:00 compute-0 nova_compute[261524]: 2025-09-30 14:46:00.436 2 DEBUG oslo_concurrency.lockutils [req-cae0a431-e1b5-4e97-bc7b-3e680f3640bb req-65bd3f7e-7e9e-4b30-9d4e-212719dbab42 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:46:00 compute-0 nova_compute[261524]: 2025-09-30 14:46:00.437 2 DEBUG nova.network.neutron [req-cae0a431-e1b5-4e97-bc7b-3e680f3640bb req-65bd3f7e-7e9e-4b30-9d4e-212719dbab42 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Refreshing network info cache for port 9647a6b7-6ba5-4788-9075-bdfb0924041c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:46:00 compute-0 nova_compute[261524]: 2025-09-30 14:46:00.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v915: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Sep 30 14:46:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:46:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:46:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:46:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:46:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:46:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:46:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:46:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:46:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:46:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:46:01 compute-0 podman[278371]: 2025-09-30 14:46:01.175075835 +0000 UTC m=+0.084145675 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:46:01 compute-0 podman[278373]: 2025-09-30 14:46:01.187245135 +0000 UTC m=+0.088290337 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:46:01 compute-0 podman[278374]: 2025-09-30 14:46:01.194693948 +0000 UTC m=+0.094646003 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 14:46:01 compute-0 podman[278372]: 2025-09-30 14:46:01.209415161 +0000 UTC m=+0.118513312 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Sep 30 14:46:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:46:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:01.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:46:01 compute-0 ceph-mon[74194]: pgmap v915: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Sep 30 14:46:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:46:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:02.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:46:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:02 compute-0 nova_compute[261524]: 2025-09-30 14:46:02.382 2 DEBUG nova.network.neutron [req-cae0a431-e1b5-4e97-bc7b-3e680f3640bb req-65bd3f7e-7e9e-4b30-9d4e-212719dbab42 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updated VIF entry in instance network info cache for port 9647a6b7-6ba5-4788-9075-bdfb0924041c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:46:02 compute-0 nova_compute[261524]: 2025-09-30 14:46:02.383 2 DEBUG nova.network.neutron [req-cae0a431-e1b5-4e97-bc7b-3e680f3640bb req-65bd3f7e-7e9e-4b30-9d4e-212719dbab42 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:46:02 compute-0 nova_compute[261524]: 2025-09-30 14:46:02.404 2 DEBUG oslo_concurrency.lockutils [req-cae0a431-e1b5-4e97-bc7b-3e680f3640bb req-65bd3f7e-7e9e-4b30-9d4e-212719dbab42 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:46:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v916: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 14:46:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:03.127 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:46:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:03.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:03 compute-0 nova_compute[261524]: 2025-09-30 14:46:03.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:03.664Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:03 compute-0 ceph-mon[74194]: pgmap v916: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Sep 30 14:46:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:46:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:04.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:46:04 compute-0 unix_chkpwd[278453]: password check failed for user (root)
Sep 30 14:46:04 compute-0 sshd-session[278450]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Sep 30 14:46:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:04] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Sep 30 14:46:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:04] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Sep 30 14:46:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v917: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 106 KiB/s wr, 15 op/s
Sep 30 14:46:05 compute-0 sudo[278454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:46:05 compute-0 sudo[278454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:05 compute-0 sudo[278454]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:46:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:05.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:46:05 compute-0 nova_compute[261524]: 2025-09-30 14:46:05.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:05 compute-0 ceph-mon[74194]: pgmap v917: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 106 KiB/s wr, 15 op/s
Sep 30 14:46:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:06.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v918: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 107 KiB/s wr, 15 op/s
Sep 30 14:46:07 compute-0 sshd-session[278450]: Failed password for root from 91.224.92.108 port 61622 ssh2
Sep 30 14:46:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:07.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:07.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:07 compute-0 ceph-mon[74194]: pgmap v918: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 107 KiB/s wr, 15 op/s
Sep 30 14:46:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:08.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:08 compute-0 unix_chkpwd[278483]: password check failed for user (root)
Sep 30 14:46:08 compute-0 nova_compute[261524]: 2025-09-30 14:46:08.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v919: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 14:46:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:09.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:10 compute-0 ceph-mon[74194]: pgmap v919: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 14:46:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:10.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:10 compute-0 sshd-session[278450]: Failed password for root from 91.224.92.108 port 61622 ssh2
Sep 30 14:46:10 compute-0 nova_compute[261524]: 2025-09-30 14:46:10.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:10 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:46:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v920: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 14:46:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 14:46:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/315782047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:46:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 14:46:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/315782047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:46:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:11.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:12 compute-0 ceph-mon[74194]: pgmap v920: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 1 op/s
Sep 30 14:46:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/315782047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:46:12 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/315782047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:46:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:12 compute-0 unix_chkpwd[278489]: password check failed for user (root)
Sep 30 14:46:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v921: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 28 KiB/s wr, 3 op/s
Sep 30 14:46:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:13.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:13 compute-0 nova_compute[261524]: 2025-09-30 14:46:13.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:13.664Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:46:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:13.665Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:14 compute-0 ceph-mon[74194]: pgmap v921: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 28 KiB/s wr, 3 op/s
Sep 30 14:46:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:14.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:14 compute-0 sshd-session[278450]: Failed password for root from 91.224.92.108 port 61622 ssh2
Sep 30 14:46:14 compute-0 sshd-session[278450]: Received disconnect from 91.224.92.108 port 61622:11:  [preauth]
Sep 30 14:46:14 compute-0 sshd-session[278450]: Disconnected from authenticating user root 91.224.92.108 port 61622 [preauth]
Sep 30 14:46:14 compute-0 sshd-session[278450]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Sep 30 14:46:14 compute-0 nova_compute[261524]: 2025-09-30 14:46:14.516 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:46:14 compute-0 nova_compute[261524]: 2025-09-30 14:46:14.568 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:46:14 compute-0 nova_compute[261524]: 2025-09-30 14:46:14.568 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:46:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:46:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:46:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:14] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:46:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:14] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:46:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v922: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 14 KiB/s wr, 2 op/s
Sep 30 14:46:14 compute-0 nova_compute[261524]: 2025-09-30 14:46:14.998 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:46:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3425693323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:46:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:46:15 compute-0 unix_chkpwd[278494]: password check failed for user (root)
Sep 30 14:46:15 compute-0 sshd-session[278492]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Sep 30 14:46:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:15.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:15 compute-0 nova_compute[261524]: 2025-09-30 14:46:15.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:15 compute-0 nova_compute[261524]: 2025-09-30 14:46:15.951 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:46:15 compute-0 nova_compute[261524]: 2025-09-30 14:46:15.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:46:15 compute-0 nova_compute[261524]: 2025-09-30 14:46:15.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:46:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:16.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:16 compute-0 ceph-mon[74194]: pgmap v922: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 14 KiB/s wr, 2 op/s
Sep 30 14:46:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/224412291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:46:16 compute-0 nova_compute[261524]: 2025-09-30 14:46:16.161 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:46:16 compute-0 nova_compute[261524]: 2025-09-30 14:46:16.161 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquired lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:46:16 compute-0 nova_compute[261524]: 2025-09-30 14:46:16.162 2 DEBUG nova.network.neutron [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Sep 30 14:46:16 compute-0 nova_compute[261524]: 2025-09-30 14:46:16.162 2 DEBUG nova.objects.instance [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ab354489-bdb3-49d0-9ed1-574d93130913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:46:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v923: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 15 KiB/s wr, 69 op/s
Sep 30 14:46:16 compute-0 sshd-session[278492]: Failed password for root from 91.224.92.108 port 51198 ssh2
Sep 30 14:46:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2430058543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:46:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:17.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:17 compute-0 unix_chkpwd[278497]: password check failed for user (root)
Sep 30 14:46:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:17.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:18 compute-0 ceph-mon[74194]: pgmap v923: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 15 KiB/s wr, 69 op/s
Sep 30 14:46:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2780710799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:46:18 compute-0 nova_compute[261524]: 2025-09-30 14:46:18.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v924: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 14 KiB/s wr, 69 op/s
Sep 30 14:46:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:19 compute-0 sshd-session[278492]: Failed password for root from 91.224.92.108 port 51198 ssh2
Sep 30 14:46:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:46:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:19.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:46:19 compute-0 unix_chkpwd[278501]: password check failed for user (root)
Sep 30 14:46:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:46:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:46:20 compute-0 ceph-mon[74194]: pgmap v924: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 14 KiB/s wr, 69 op/s
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.489 2 DEBUG nova.network.neutron [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.510 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Releasing lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.511 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.511 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.512 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.512 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.512 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.512 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.512 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.536 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.537 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.537 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.537 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.538 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:46:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v925: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 14 KiB/s wr, 69 op/s
Sep 30 14:46:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:46:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/789809689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:46:20 compute-0 nova_compute[261524]: 2025-09-30 14:46:20.973 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.042 2 DEBUG nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.043 2 DEBUG nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Sep 30 14:46:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/789809689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.263 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.264 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4388MB free_disk=59.897010803222656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.264 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.264 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:46:21 compute-0 sshd-session[278492]: Failed password for root from 91.224.92.108 port 51198 ssh2
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.326 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Instance ab354489-bdb3-49d0-9ed1-574d93130913 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.326 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.326 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:46:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:21.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.367 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:46:21 compute-0 sshd-session[278492]: Received disconnect from 91.224.92.108 port 51198:11:  [preauth]
Sep 30 14:46:21 compute-0 sshd-session[278492]: Disconnected from authenticating user root 91.224.92.108 port 51198 [preauth]
Sep 30 14:46:21 compute-0 sshd-session[278492]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Sep 30 14:46:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:46:21 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2051488299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.920 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.927 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.949 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.952 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:46:21 compute-0 nova_compute[261524]: 2025-09-30 14:46:21.953 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:46:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:22.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:22 compute-0 ceph-mon[74194]: pgmap v925: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 14 KiB/s wr, 69 op/s
Sep 30 14:46:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2051488299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:46:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:22 compute-0 unix_chkpwd[278551]: password check failed for user (root)
Sep 30 14:46:22 compute-0 sshd-session[278546]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Sep 30 14:46:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v926: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 14 KiB/s wr, 121 op/s
Sep 30 14:46:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:23.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:23 compute-0 nova_compute[261524]: 2025-09-30 14:46:23.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:23.665Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:24.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:24 compute-0 ceph-mon[74194]: pgmap v926: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 14 KiB/s wr, 121 op/s
Sep 30 14:46:24 compute-0 sshd-session[278546]: Failed password for root from 91.224.92.108 port 45706 ssh2
Sep 30 14:46:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:24] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:46:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:24] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:46:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v927: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 1023 B/s wr, 119 op/s
Sep 30 14:46:25 compute-0 sudo[278554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:46:25 compute-0 sudo[278554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:25 compute-0 sudo[278554]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:25.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:25 compute-0 nova_compute[261524]: 2025-09-30 14:46:25.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:26.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:26 compute-0 ceph-mon[74194]: pgmap v927: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 1023 B/s wr, 119 op/s
Sep 30 14:46:26 compute-0 unix_chkpwd[278581]: password check failed for user (root)
Sep 30 14:46:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v928: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 4.3 KiB/s wr, 120 op/s
Sep 30 14:46:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:27.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:27 compute-0 ceph-mon[74194]: pgmap v928: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 4.3 KiB/s wr, 120 op/s
Sep 30 14:46:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:27.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:27 compute-0 sshd-session[278546]: Failed password for root from 91.224.92.108 port 45706 ssh2
Sep 30 14:46:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:28.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:28 compute-0 unix_chkpwd[278584]: password check failed for user (root)
Sep 30 14:46:28 compute-0 nova_compute[261524]: 2025-09-30 14:46:28.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v929: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.3 KiB/s wr, 53 op/s
Sep 30 14:46:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:29.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:46:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:46:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:46:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:46:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:46:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:46:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:46:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:46:29 compute-0 ceph-mon[74194]: pgmap v929: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.3 KiB/s wr, 53 op/s
Sep 30 14:46:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:46:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:30.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:30 compute-0 ovn_controller[154021]: 2025-09-30T14:46:30Z|00070|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Sep 30 14:46:30 compute-0 nova_compute[261524]: 2025-09-30 14:46:30.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v930: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.3 KiB/s wr, 53 op/s
Sep 30 14:46:31 compute-0 sshd-session[278546]: Failed password for root from 91.224.92.108 port 45706 ssh2
Sep 30 14:46:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000049s ======
Sep 30 14:46:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:31.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Sep 30 14:46:31 compute-0 ceph-mon[74194]: pgmap v930: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.3 KiB/s wr, 53 op/s
Sep 30 14:46:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:46:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:32.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:46:32 compute-0 podman[278591]: 2025-09-30 14:46:32.14990961 +0000 UTC m=+0.059276162 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 14:46:32 compute-0 podman[278589]: 2025-09-30 14:46:32.153896328 +0000 UTC m=+0.071610016 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 14:46:32 compute-0 podman[278590]: 2025-09-30 14:46:32.176158077 +0000 UTC m=+0.092303566 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Sep 30 14:46:32 compute-0 podman[278602]: 2025-09-30 14:46:32.202742882 +0000 UTC m=+0.096411957 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Sep 30 14:46:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:32 compute-0 sshd-session[278546]: Received disconnect from 91.224.92.108 port 45706:11:  [preauth]
Sep 30 14:46:32 compute-0 sshd-session[278546]: Disconnected from authenticating user root 91.224.92.108 port 45706 [preauth]
Sep 30 14:46:32 compute-0 sshd-session[278546]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Sep 30 14:46:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v931: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.3 KiB/s wr, 54 op/s
Sep 30 14:46:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:33.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:33 compute-0 nova_compute[261524]: 2025-09-30 14:46:33.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:33.667Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:33 compute-0 ceph-mon[74194]: pgmap v931: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.3 KiB/s wr, 54 op/s
Sep 30 14:46:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:34.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:34] "GET /metrics HTTP/1.1" 200 48544 "" "Prometheus/2.51.0"
Sep 30 14:46:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:34] "GET /metrics HTTP/1.1" 200 48544 "" "Prometheus/2.51.0"
Sep 30 14:46:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v932: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 5.3 KiB/s wr, 1 op/s
Sep 30 14:46:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:46:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:35.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:46:35 compute-0 nova_compute[261524]: 2025-09-30 14:46:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:35 compute-0 ceph-mon[74194]: pgmap v932: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 5.3 KiB/s wr, 1 op/s
Sep 30 14:46:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:36.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v933: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 6.3 KiB/s wr, 1 op/s
Sep 30 14:46:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:37.167Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:37.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:37 compute-0 ceph-mon[74194]: pgmap v933: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 6.3 KiB/s wr, 1 op/s
Sep 30 14:46:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:38.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:38.262 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:46:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:38.262 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:46:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:38.263 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:46:38 compute-0 nova_compute[261524]: 2025-09-30 14:46:38.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v934: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 3.0 KiB/s wr, 0 op/s
Sep 30 14:46:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:39.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:39 compute-0 ceph-mon[74194]: pgmap v934: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 3.0 KiB/s wr, 0 op/s
Sep 30 14:46:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:46:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:40.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:46:40 compute-0 nova_compute[261524]: 2025-09-30 14:46:40.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v935: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 3.0 KiB/s wr, 0 op/s
Sep 30 14:46:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:41.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:41 compute-0 ceph-mon[74194]: pgmap v935: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 3.0 KiB/s wr, 0 op/s
Sep 30 14:46:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:42.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v936: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 12 KiB/s wr, 2 op/s
Sep 30 14:46:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:43.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:43 compute-0 nova_compute[261524]: 2025-09-30 14:46:43.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:43.668Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:43 compute-0 ceph-mon[74194]: pgmap v936: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 12 KiB/s wr, 2 op/s
Sep 30 14:46:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:44.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:46:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:46:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:44] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Sep 30 14:46:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:44] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Sep 30 14:46:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v937: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 10 KiB/s wr, 1 op/s
Sep 30 14:46:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:46:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:46:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:45.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:46:45 compute-0 sudo[278685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:46:45 compute-0 sudo[278685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:45 compute-0 sudo[278685]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:45 compute-0 nova_compute[261524]: 2025-09-30 14:46:45.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:46 compute-0 ceph-mon[74194]: pgmap v937: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 10 KiB/s wr, 1 op/s
Sep 30 14:46:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:46.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v938: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 13 KiB/s wr, 2 op/s
Sep 30 14:46:47 compute-0 sudo[278712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:46:47 compute-0 sudo[278712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:47 compute-0 sudo[278712]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:47.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:47 compute-0 sudo[278737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:46:47 compute-0 sudo[278737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:47.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:47 compute-0 sudo[278737]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:46:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:46:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:46:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:46:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:46:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:46:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:46:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:46:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:46:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:46:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:46:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:46:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:46:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:46:48 compute-0 ceph-mon[74194]: pgmap v938: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 13 KiB/s wr, 2 op/s
Sep 30 14:46:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:46:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:46:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:46:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:46:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:46:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:46:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:46:48 compute-0 sudo[278794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:46:48 compute-0 sudo[278794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:48 compute-0 sudo[278794]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:48.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:48 compute-0 sudo[278819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:46:48 compute-0 sudo[278819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:48 compute-0 nova_compute[261524]: 2025-09-30 14:46:48.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:48 compute-0 podman[278885]: 2025-09-30 14:46:48.62264133 +0000 UTC m=+0.103765148 container create 31fa1a56225a2c2c91a9bdd991ea434864aaad7e45ba7e55f76c9d857a4badd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:46:48 compute-0 podman[278885]: 2025-09-30 14:46:48.564418956 +0000 UTC m=+0.045542834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:46:48 compute-0 systemd[1]: Started libpod-conmon-31fa1a56225a2c2c91a9bdd991ea434864aaad7e45ba7e55f76c9d857a4badd0.scope.
Sep 30 14:46:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:46:48 compute-0 podman[278885]: 2025-09-30 14:46:48.731300698 +0000 UTC m=+0.212424496 container init 31fa1a56225a2c2c91a9bdd991ea434864aaad7e45ba7e55f76c9d857a4badd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Sep 30 14:46:48 compute-0 podman[278885]: 2025-09-30 14:46:48.73987368 +0000 UTC m=+0.220997488 container start 31fa1a56225a2c2c91a9bdd991ea434864aaad7e45ba7e55f76c9d857a4badd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:46:48 compute-0 podman[278885]: 2025-09-30 14:46:48.74271415 +0000 UTC m=+0.223837968 container attach 31fa1a56225a2c2c91a9bdd991ea434864aaad7e45ba7e55f76c9d857a4badd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:46:48 compute-0 bold_lehmann[278902]: 167 167
Sep 30 14:46:48 compute-0 systemd[1]: libpod-31fa1a56225a2c2c91a9bdd991ea434864aaad7e45ba7e55f76c9d857a4badd0.scope: Deactivated successfully.
Sep 30 14:46:48 compute-0 conmon[278902]: conmon 31fa1a56225a2c2c91a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31fa1a56225a2c2c91a9bdd991ea434864aaad7e45ba7e55f76c9d857a4badd0.scope/container/memory.events
Sep 30 14:46:48 compute-0 podman[278907]: 2025-09-30 14:46:48.787608686 +0000 UTC m=+0.027047467 container died 31fa1a56225a2c2c91a9bdd991ea434864aaad7e45ba7e55f76c9d857a4badd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 14:46:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v939: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 12 KiB/s wr, 2 op/s
Sep 30 14:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c0b52626dc754309154bd302a6a40ebba50d2e8cdbe2191162be6fb443138ac-merged.mount: Deactivated successfully.
Sep 30 14:46:48 compute-0 podman[278907]: 2025-09-30 14:46:48.844021426 +0000 UTC m=+0.083460187 container remove 31fa1a56225a2c2c91a9bdd991ea434864aaad7e45ba7e55f76c9d857a4badd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:46:48 compute-0 systemd[1]: libpod-conmon-31fa1a56225a2c2c91a9bdd991ea434864aaad7e45ba7e55f76c9d857a4badd0.scope: Deactivated successfully.
Sep 30 14:46:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:49 compute-0 podman[278929]: 2025-09-30 14:46:49.096328095 +0000 UTC m=+0.041437283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:46:49 compute-0 podman[278929]: 2025-09-30 14:46:49.224929244 +0000 UTC m=+0.170038442 container create 7575d698ea8961fd7648fba74b498e2443c831d36fb281a10ec8df192a2431c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:46:49 compute-0 systemd[1]: Started libpod-conmon-7575d698ea8961fd7648fba74b498e2443c831d36fb281a10ec8df192a2431c4.scope.
Sep 30 14:46:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d27db6cd974c3cf9ba543a0e825f77e94e460c3416f41434ebedeb933c33924/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d27db6cd974c3cf9ba543a0e825f77e94e460c3416f41434ebedeb933c33924/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d27db6cd974c3cf9ba543a0e825f77e94e460c3416f41434ebedeb933c33924/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d27db6cd974c3cf9ba543a0e825f77e94e460c3416f41434ebedeb933c33924/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d27db6cd974c3cf9ba543a0e825f77e94e460c3416f41434ebedeb933c33924/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:49 compute-0 podman[278929]: 2025-09-30 14:46:49.353028011 +0000 UTC m=+0.298137169 container init 7575d698ea8961fd7648fba74b498e2443c831d36fb281a10ec8df192a2431c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:46:49 compute-0 podman[278929]: 2025-09-30 14:46:49.364935735 +0000 UTC m=+0.310044893 container start 7575d698ea8961fd7648fba74b498e2443c831d36fb281a10ec8df192a2431c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:46:49 compute-0 podman[278929]: 2025-09-30 14:46:49.369027225 +0000 UTC m=+0.314136383 container attach 7575d698ea8961fd7648fba74b498e2443c831d36fb281a10ec8df192a2431c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Sep 30 14:46:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:49.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:49 compute-0 hopeful_poitras[278945]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:46:49 compute-0 hopeful_poitras[278945]: --> All data devices are unavailable
Sep 30 14:46:49 compute-0 systemd[1]: libpod-7575d698ea8961fd7648fba74b498e2443c831d36fb281a10ec8df192a2431c4.scope: Deactivated successfully.
Sep 30 14:46:49 compute-0 podman[278929]: 2025-09-30 14:46:49.720885176 +0000 UTC m=+0.665994334 container died 7575d698ea8961fd7648fba74b498e2443c831d36fb281a10ec8df192a2431c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d27db6cd974c3cf9ba543a0e825f77e94e460c3416f41434ebedeb933c33924-merged.mount: Deactivated successfully.
Sep 30 14:46:49 compute-0 podman[278929]: 2025-09-30 14:46:49.768871879 +0000 UTC m=+0.713981037 container remove 7575d698ea8961fd7648fba74b498e2443c831d36fb281a10ec8df192a2431c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_poitras, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:46:49 compute-0 systemd[1]: libpod-conmon-7575d698ea8961fd7648fba74b498e2443c831d36fb281a10ec8df192a2431c4.scope: Deactivated successfully.
Sep 30 14:46:49 compute-0 sudo[278819]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:49 compute-0 sudo[278973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:46:49 compute-0 sudo[278973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:49 compute-0 sudo[278973]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:49 compute-0 sudo[278998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:46:49 compute-0 sudo[278998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:50.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:50 compute-0 ceph-mon[74194]: pgmap v939: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 12 KiB/s wr, 2 op/s
Sep 30 14:46:50 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:50.408 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:46:50 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:50.411 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:46:50 compute-0 nova_compute[261524]: 2025-09-30 14:46:50.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:50 compute-0 podman[279065]: 2025-09-30 14:46:50.485587282 +0000 UTC m=+0.098789325 container create ed4a29a749fde47ab4bc4822a0e306a290f04d31f450a1ce4418e40b26f3afc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rosalind, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:46:50 compute-0 nova_compute[261524]: 2025-09-30 14:46:50.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:50 compute-0 systemd[1]: Started libpod-conmon-ed4a29a749fde47ab4bc4822a0e306a290f04d31f450a1ce4418e40b26f3afc2.scope.
Sep 30 14:46:50 compute-0 podman[279065]: 2025-09-30 14:46:50.460920694 +0000 UTC m=+0.074122827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:46:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:46:50 compute-0 podman[279065]: 2025-09-30 14:46:50.578312008 +0000 UTC m=+0.191514081 container init ed4a29a749fde47ab4bc4822a0e306a290f04d31f450a1ce4418e40b26f3afc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:46:50 compute-0 podman[279065]: 2025-09-30 14:46:50.585013603 +0000 UTC m=+0.198215646 container start ed4a29a749fde47ab4bc4822a0e306a290f04d31f450a1ce4418e40b26f3afc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rosalind, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:46:50 compute-0 podman[279065]: 2025-09-30 14:46:50.588068748 +0000 UTC m=+0.201270801 container attach ed4a29a749fde47ab4bc4822a0e306a290f04d31f450a1ce4418e40b26f3afc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 14:46:50 compute-0 epic_rosalind[279082]: 167 167
Sep 30 14:46:50 compute-0 systemd[1]: libpod-ed4a29a749fde47ab4bc4822a0e306a290f04d31f450a1ce4418e40b26f3afc2.scope: Deactivated successfully.
Sep 30 14:46:50 compute-0 podman[279065]: 2025-09-30 14:46:50.590525249 +0000 UTC m=+0.203727282 container died ed4a29a749fde47ab4bc4822a0e306a290f04d31f450a1ce4418e40b26f3afc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-47b48c99b6c398505642e8a1d25d7ef7a3610c755a4c2333c603f3988a04c2e6-merged.mount: Deactivated successfully.
Sep 30 14:46:50 compute-0 podman[279065]: 2025-09-30 14:46:50.632498983 +0000 UTC m=+0.245701016 container remove ed4a29a749fde47ab4bc4822a0e306a290f04d31f450a1ce4418e40b26f3afc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:46:50 compute-0 systemd[1]: libpod-conmon-ed4a29a749fde47ab4bc4822a0e306a290f04d31f450a1ce4418e40b26f3afc2.scope: Deactivated successfully.
Sep 30 14:46:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v940: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 12 KiB/s wr, 2 op/s
Sep 30 14:46:50 compute-0 podman[279107]: 2025-09-30 14:46:50.833850825 +0000 UTC m=+0.061841805 container create 6bbeecaa02a61f47c6430c6133961f79002b6fa53b50fb941eda50b6413948da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_maxwell, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Sep 30 14:46:50 compute-0 systemd[1]: Started libpod-conmon-6bbeecaa02a61f47c6430c6133961f79002b6fa53b50fb941eda50b6413948da.scope.
Sep 30 14:46:50 compute-0 podman[279107]: 2025-09-30 14:46:50.807688121 +0000 UTC m=+0.035679111 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:46:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6345931a6f727c2fef2167d2cb473692e661a5d15c2d753e2e333dc7ca84c6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6345931a6f727c2fef2167d2cb473692e661a5d15c2d753e2e333dc7ca84c6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6345931a6f727c2fef2167d2cb473692e661a5d15c2d753e2e333dc7ca84c6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6345931a6f727c2fef2167d2cb473692e661a5d15c2d753e2e333dc7ca84c6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:50 compute-0 podman[279107]: 2025-09-30 14:46:50.943869637 +0000 UTC m=+0.171860647 container init 6bbeecaa02a61f47c6430c6133961f79002b6fa53b50fb941eda50b6413948da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:46:50 compute-0 podman[279107]: 2025-09-30 14:46:50.956002656 +0000 UTC m=+0.183993596 container start 6bbeecaa02a61f47c6430c6133961f79002b6fa53b50fb941eda50b6413948da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:46:50 compute-0 podman[279107]: 2025-09-30 14:46:50.959744538 +0000 UTC m=+0.187735518 container attach 6bbeecaa02a61f47c6430c6133961f79002b6fa53b50fb941eda50b6413948da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]: {
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:     "0": [
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:         {
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "devices": [
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "/dev/loop3"
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             ],
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "lv_name": "ceph_lv0",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "lv_size": "21470642176",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "name": "ceph_lv0",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "tags": {
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.cluster_name": "ceph",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.crush_device_class": "",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.encrypted": "0",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.osd_id": "0",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.type": "block",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.vdo": "0",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:                 "ceph.with_tpm": "0"
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             },
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "type": "block",
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:             "vg_name": "ceph_vg0"
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:         }
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]:     ]
Sep 30 14:46:51 compute-0 affectionate_maxwell[279124]: }
Sep 30 14:46:51 compute-0 systemd[1]: libpod-6bbeecaa02a61f47c6430c6133961f79002b6fa53b50fb941eda50b6413948da.scope: Deactivated successfully.
Sep 30 14:46:51 compute-0 podman[279107]: 2025-09-30 14:46:51.288663794 +0000 UTC m=+0.516654734 container died 6bbeecaa02a61f47c6430c6133961f79002b6fa53b50fb941eda50b6413948da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_maxwell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Sep 30 14:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6345931a6f727c2fef2167d2cb473692e661a5d15c2d753e2e333dc7ca84c6d-merged.mount: Deactivated successfully.
Sep 30 14:46:51 compute-0 podman[279107]: 2025-09-30 14:46:51.326862706 +0000 UTC m=+0.554853646 container remove 6bbeecaa02a61f47c6430c6133961f79002b6fa53b50fb941eda50b6413948da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:46:51 compute-0 systemd[1]: libpod-conmon-6bbeecaa02a61f47c6430c6133961f79002b6fa53b50fb941eda50b6413948da.scope: Deactivated successfully.
Sep 30 14:46:51 compute-0 sudo[278998]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:51.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:51 compute-0 sudo[279148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:46:51 compute-0 sudo[279148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:51 compute-0 sudo[279148]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:51 compute-0 sudo[279174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:46:51 compute-0 sudo[279174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:52 compute-0 podman[279242]: 2025-09-30 14:46:52.00466035 +0000 UTC m=+0.052544985 container create 4278c8db4da9fe7b54205396ac5d6fe8e0ed196f4f8ba2bcafe164bdde73fe48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:46:52 compute-0 systemd[1]: Started libpod-conmon-4278c8db4da9fe7b54205396ac5d6fe8e0ed196f4f8ba2bcafe164bdde73fe48.scope.
Sep 30 14:46:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:46:52 compute-0 podman[279242]: 2025-09-30 14:46:51.980557927 +0000 UTC m=+0.028442612 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:46:52 compute-0 podman[279242]: 2025-09-30 14:46:52.09146252 +0000 UTC m=+0.139347235 container init 4278c8db4da9fe7b54205396ac5d6fe8e0ed196f4f8ba2bcafe164bdde73fe48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:46:52 compute-0 podman[279242]: 2025-09-30 14:46:52.100835511 +0000 UTC m=+0.148720146 container start 4278c8db4da9fe7b54205396ac5d6fe8e0ed196f4f8ba2bcafe164bdde73fe48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_edison, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Sep 30 14:46:52 compute-0 podman[279242]: 2025-09-30 14:46:52.10444011 +0000 UTC m=+0.152324825 container attach 4278c8db4da9fe7b54205396ac5d6fe8e0ed196f4f8ba2bcafe164bdde73fe48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_edison, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:46:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:52.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:52 compute-0 affectionate_edison[279258]: 167 167
Sep 30 14:46:52 compute-0 systemd[1]: libpod-4278c8db4da9fe7b54205396ac5d6fe8e0ed196f4f8ba2bcafe164bdde73fe48.scope: Deactivated successfully.
Sep 30 14:46:52 compute-0 podman[279242]: 2025-09-30 14:46:52.109260328 +0000 UTC m=+0.157144973 container died 4278c8db4da9fe7b54205396ac5d6fe8e0ed196f4f8ba2bcafe164bdde73fe48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_edison, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b38fa2bb790d95e4d465b03d565041012bee90a48ef0c8f252528179e9eb6502-merged.mount: Deactivated successfully.
Sep 30 14:46:52 compute-0 podman[279242]: 2025-09-30 14:46:52.147220194 +0000 UTC m=+0.195104829 container remove 4278c8db4da9fe7b54205396ac5d6fe8e0ed196f4f8ba2bcafe164bdde73fe48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:46:52 compute-0 systemd[1]: libpod-conmon-4278c8db4da9fe7b54205396ac5d6fe8e0ed196f4f8ba2bcafe164bdde73fe48.scope: Deactivated successfully.
Sep 30 14:46:52 compute-0 ceph-mon[74194]: pgmap v940: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 12 KiB/s wr, 2 op/s
Sep 30 14:46:52 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 14:46:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.245824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243612245893, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1188, "num_deletes": 255, "total_data_size": 2086683, "memory_usage": 2116416, "flush_reason": "Manual Compaction"}
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243612268549, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2056280, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27065, "largest_seqno": 28252, "table_properties": {"data_size": 2050632, "index_size": 2979, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11907, "raw_average_key_size": 19, "raw_value_size": 2039231, "raw_average_value_size": 3294, "num_data_blocks": 132, "num_entries": 619, "num_filter_entries": 619, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759243505, "oldest_key_time": 1759243505, "file_creation_time": 1759243612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 22992 microseconds, and 8809 cpu microseconds.
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.268794) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2056280 bytes OK
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.268861) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.271206) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.271244) EVENT_LOG_v1 {"time_micros": 1759243612271233, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.271269) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2081376, prev total WAL file size 2081376, number of live WAL files 2.
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.272049) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2008KB)], [59(14MB)]
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243612272116, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17040706, "oldest_snapshot_seqno": -1}
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6001 keys, 16907268 bytes, temperature: kUnknown
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243612436207, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 16907268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16863738, "index_size": 27420, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 153023, "raw_average_key_size": 25, "raw_value_size": 16752109, "raw_average_value_size": 2791, "num_data_blocks": 1123, "num_entries": 6001, "num_filter_entries": 6001, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759243612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:46:52 compute-0 podman[279285]: 2025-09-30 14:46:52.441728502 +0000 UTC m=+0.120956952 container create df22f0a6a9e41adf87ff36d57a94f552ca600d687b6315dec29b600341dad215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:46:52 compute-0 podman[279285]: 2025-09-30 14:46:52.349360956 +0000 UTC m=+0.028589426 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.436485) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16907268 bytes
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.442440) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.8 rd, 103.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 14.3 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(16.5) write-amplify(8.2) OK, records in: 6527, records dropped: 526 output_compression: NoCompression
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.442460) EVENT_LOG_v1 {"time_micros": 1759243612442452, "job": 32, "event": "compaction_finished", "compaction_time_micros": 164206, "compaction_time_cpu_micros": 38102, "output_level": 6, "num_output_files": 1, "total_output_size": 16907268, "num_input_records": 6527, "num_output_records": 6001, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243612442999, "job": 32, "event": "table_file_deletion", "file_number": 61}
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243612445780, "job": 32, "event": "table_file_deletion", "file_number": 59}
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.271975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.445959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.445968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.445970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.445973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:46:52 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:46:52.445978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:46:52 compute-0 systemd[1]: Started libpod-conmon-df22f0a6a9e41adf87ff36d57a94f552ca600d687b6315dec29b600341dad215.scope.
Sep 30 14:46:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c03e0e8712e571039fe529baa6e0b2a6dbe4300bdfc401c1733c166ef4ed97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c03e0e8712e571039fe529baa6e0b2a6dbe4300bdfc401c1733c166ef4ed97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c03e0e8712e571039fe529baa6e0b2a6dbe4300bdfc401c1733c166ef4ed97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c03e0e8712e571039fe529baa6e0b2a6dbe4300bdfc401c1733c166ef4ed97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:46:52 compute-0 podman[279285]: 2025-09-30 14:46:52.541589843 +0000 UTC m=+0.220818373 container init df22f0a6a9e41adf87ff36d57a94f552ca600d687b6315dec29b600341dad215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banzai, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:46:52 compute-0 podman[279285]: 2025-09-30 14:46:52.550877672 +0000 UTC m=+0.230106132 container start df22f0a6a9e41adf87ff36d57a94f552ca600d687b6315dec29b600341dad215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banzai, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:46:52 compute-0 podman[279285]: 2025-09-30 14:46:52.554486961 +0000 UTC m=+0.233715511 container attach df22f0a6a9e41adf87ff36d57a94f552ca600d687b6315dec29b600341dad215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 14:46:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v941: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 30 op/s
Sep 30 14:46:53 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1092860867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:46:53 compute-0 lvm[279377]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:46:53 compute-0 lvm[279377]: VG ceph_vg0 finished
Sep 30 14:46:53 compute-0 stoic_banzai[279302]: {}
Sep 30 14:46:53 compute-0 systemd[1]: libpod-df22f0a6a9e41adf87ff36d57a94f552ca600d687b6315dec29b600341dad215.scope: Deactivated successfully.
Sep 30 14:46:53 compute-0 systemd[1]: libpod-df22f0a6a9e41adf87ff36d57a94f552ca600d687b6315dec29b600341dad215.scope: Consumed 1.311s CPU time.
Sep 30 14:46:53 compute-0 podman[279285]: 2025-09-30 14:46:53.33555305 +0000 UTC m=+1.014781520 container died df22f0a6a9e41adf87ff36d57a94f552ca600d687b6315dec29b600341dad215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banzai, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-99c03e0e8712e571039fe529baa6e0b2a6dbe4300bdfc401c1733c166ef4ed97-merged.mount: Deactivated successfully.
Sep 30 14:46:53 compute-0 podman[279285]: 2025-09-30 14:46:53.38829989 +0000 UTC m=+1.067528340 container remove df22f0a6a9e41adf87ff36d57a94f552ca600d687b6315dec29b600341dad215 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:46:53 compute-0 systemd[1]: libpod-conmon-df22f0a6a9e41adf87ff36d57a94f552ca600d687b6315dec29b600341dad215.scope: Deactivated successfully.
Sep 30 14:46:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:46:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:53.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:46:53 compute-0 sudo[279174]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:46:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:46:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:46:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:46:53 compute-0 sudo[279393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:46:53 compute-0 sudo[279393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:46:53 compute-0 sudo[279393]: pam_unix(sudo:session): session closed for user root
Sep 30 14:46:53 compute-0 nova_compute[261524]: 2025-09-30 14:46:53.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:53.670Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:46:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:54.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:46:54 compute-0 ceph-mon[74194]: pgmap v941: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 30 op/s
Sep 30 14:46:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:46:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:46:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:54] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Sep 30 14:46:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:46:54] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Sep 30 14:46:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v942: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Sep 30 14:46:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:46:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:46:55 compute-0 nova_compute[261524]: 2025-09-30 14:46:55.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.032 2 DEBUG oslo_concurrency.lockutils [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "interface-ab354489-bdb3-49d0-9ed1-574d93130913-9647a6b7-6ba5-4788-9075-bdfb0924041c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.033 2 DEBUG oslo_concurrency.lockutils [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "interface-ab354489-bdb3-49d0-9ed1-574d93130913-9647a6b7-6ba5-4788-9075-bdfb0924041c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.053 2 DEBUG nova.objects.instance [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'flavor' on Instance uuid ab354489-bdb3-49d0-9ed1-574d93130913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.073 2 DEBUG nova.virt.libvirt.vif [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-711458846',display_name='tempest-TestNetworkBasicOps-server-711458846',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-711458846',id=6,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMQL53T2ZkSAoVzfinB0Xb6YV6zqFtICzdovU1Kn/PIvW0fTnkL2hml556IQQU+IFdjIRu6Xc3RQKHc2DkPb73zFKtN5c4E62Q7wZZkQI9VBc0aWDqG12KKHVj732hp6w==',key_name='tempest-TestNetworkBasicOps-1073344022',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:44:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-z3tdfpa2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:44:53Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=ab354489-bdb3-49d0-9ed1-574d93130913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.074 2 DEBUG nova.network.os_vif_util [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.076 2 DEBUG nova.network.os_vif_util [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.081 2 DEBUG nova.virt.libvirt.guest [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:21:35:09"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9647a6b7-6b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.085 2 DEBUG nova.virt.libvirt.guest [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:21:35:09"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9647a6b7-6b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.088 2 DEBUG nova.virt.libvirt.driver [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Attempting to detach device tap9647a6b7-6b from instance ab354489-bdb3-49d0-9ed1-574d93130913 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.089 2 DEBUG nova.virt.libvirt.guest [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] detach device xml: <interface type="ethernet">
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <mac address="fa:16:3e:21:35:09"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <model type="virtio"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <mtu size="1442"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <target dev="tap9647a6b7-6b"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]: </interface>
Sep 30 14:46:56 compute-0 nova_compute[261524]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.095 2 DEBUG nova.virt.libvirt.guest [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:21:35:09"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9647a6b7-6b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.099 2 DEBUG nova.virt.libvirt.guest [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:21:35:09"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9647a6b7-6b"/></interface>not found in domain: <domain type='kvm' id='3'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <name>instance-00000006</name>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <uuid>ab354489-bdb3-49d0-9ed1-574d93130913</uuid>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-711458846</nova:name>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:45:23</nova:creationTime>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:port uuid="70e1bfe9-6006-4e08-9c7f-c0d64c8269a0">
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:port uuid="9647a6b7-6ba5-4788-9075-bdfb0924041c">
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:46:56 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <memory unit='KiB'>131072</memory>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <currentMemory unit='KiB'>131072</currentMemory>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <vcpu placement='static'>1</vcpu>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <resource>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <partition>/machine</partition>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </resource>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <sysinfo type='smbios'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <system>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='manufacturer'>RDO</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='product'>OpenStack Compute</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='serial'>ab354489-bdb3-49d0-9ed1-574d93130913</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='uuid'>ab354489-bdb3-49d0-9ed1-574d93130913</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='family'>Virtual Machine</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </system>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <os>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <boot dev='hd'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <smbios mode='sysinfo'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </os>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <features>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <vmcoreinfo state='on'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </features>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <cpu mode='custom' match='exact' check='full'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <vendor>AMD</vendor>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='x2apic'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc-deadline'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='hypervisor'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc_adjust'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='spec-ctrl'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='stibp'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='arch-capabilities'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='ssbd'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='cmp_legacy'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='overflow-recov'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='succor'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='ibrs'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='amd-ssbd'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='virt-ssbd'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='lbrv'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='tsc-scale'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='vmcb-clean'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='flushbyasid'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='pause-filter'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='pfthreshold'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='svme-addr-chk'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='rdctl-no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='mds-no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='gds-no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='rfds-no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='xsaves'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='svm'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='topoext'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='npt'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='nrip-save'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <clock offset='utc'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <timer name='pit' tickpolicy='delay'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <timer name='rtc' tickpolicy='catchup'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <timer name='hpet' present='no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <on_poweroff>destroy</on_poweroff>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <on_reboot>restart</on_reboot>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <on_crash>destroy</on_crash>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <disk type='network' device='disk'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/ab354489-bdb3-49d0-9ed1-574d93130913_disk' index='2'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       </source>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target dev='vda' bus='virtio'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='virtio-disk0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <disk type='network' device='cdrom'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/ab354489-bdb3-49d0-9ed1-574d93130913_disk.config' index='1'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       </source>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target dev='sda' bus='sata'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <readonly/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='sata0-0-0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='0' model='pcie-root'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pcie.0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='1' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='1' port='0x10'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='2' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='2' port='0x11'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='3' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='3' port='0x12'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.3'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='4' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='4' port='0x13'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.4'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='5' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='5' port='0x14'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.5'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='6' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='6' port='0x15'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='7' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='7' port='0x16'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.7'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='8' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='8' port='0x17'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.8'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='9' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='9' port='0x18'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.9'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='10' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='10' port='0x19'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.10'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='11' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='11' port='0x1a'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.11'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='12' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='12' port='0x1b'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.12'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='13' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='13' port='0x1c'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.13'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='14' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='14' port='0x1d'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.14'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='15' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='15' port='0x1e'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.15'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='16' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='16' port='0x1f'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.16'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='17' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='17' port='0x20'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.17'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='18' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='18' port='0x21'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.18'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='19' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='19' port='0x22'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.19'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='20' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='20' port='0x23'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.20'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='21' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='21' port='0x24'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.21'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='22' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='22' port='0x25'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.22'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='23' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='23' port='0x26'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.23'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='24' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='24' port='0x27'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.24'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='25' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='25' port='0x28'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.25'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-pci-bridge'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.26'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='usb' index='0' model='piix3-uhci'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='usb'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='sata' index='0'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='ide'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <interface type='ethernet'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <mac address='fa:16:3e:db:b9:ad'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target dev='tap70e1bfe9-60'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model type='virtio'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <driver name='vhost' rx_queue_size='512'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <mtu size='1442'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='net0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <interface type='ethernet'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <mac address='fa:16:3e:21:35:09'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target dev='tap9647a6b7-6b'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model type='virtio'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <driver name='vhost' rx_queue_size='512'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <mtu size='1442'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='net1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <serial type='pty'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/console.log' append='off'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target type='isa-serial' port='0'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <model name='isa-serial'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       </target>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <console type='pty' tty='/dev/pts/0'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/console.log' append='off'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target type='serial' port='0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </console>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <input type='tablet' bus='usb'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='input0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='usb' bus='0' port='1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <input type='mouse' bus='ps2'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='input1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <input type='keyboard' bus='ps2'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='input2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <listen type='address' address='::0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <audio id='1' type='none'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <video>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model type='virtio' heads='1' primary='yes'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='video0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </video>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <watchdog model='itco' action='reset'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='watchdog0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </watchdog>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <memballoon model='virtio'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <stats period='10'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='balloon0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <rng model='virtio'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <backend model='random'>/dev/urandom</backend>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='rng0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <label>system_u:system_r:svirt_t:s0:c620,c988</label>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c620,c988</imagelabel>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <label>+107:+107</label>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <imagelabel>+107:+107</imagelabel>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:46:56 compute-0 nova_compute[261524]: </domain>
Sep 30 14:46:56 compute-0 nova_compute[261524]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.101 2 INFO nova.virt.libvirt.driver [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully detached device tap9647a6b7-6b from instance ab354489-bdb3-49d0-9ed1-574d93130913 from the persistent domain config.
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.101 2 DEBUG nova.virt.libvirt.driver [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] (1/8): Attempting to detach device tap9647a6b7-6b with device alias net1 from instance ab354489-bdb3-49d0-9ed1-574d93130913 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.102 2 DEBUG nova.virt.libvirt.guest [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] detach device xml: <interface type="ethernet">
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <mac address="fa:16:3e:21:35:09"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <model type="virtio"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <driver name="vhost" rx_queue_size="512"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <mtu size="1442"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <target dev="tap9647a6b7-6b"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]: </interface>
Sep 30 14:46:56 compute-0 nova_compute[261524]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Sep 30 14:46:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:56.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:56 compute-0 kernel: tap9647a6b7-6b (unregistering): left promiscuous mode
Sep 30 14:46:56 compute-0 NetworkManager[45472]: <info>  [1759243616.2165] device (tap9647a6b7-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:56 compute-0 ovn_controller[154021]: 2025-09-30T14:46:56Z|00071|binding|INFO|Releasing lport 9647a6b7-6ba5-4788-9075-bdfb0924041c from this chassis (sb_readonly=0)
Sep 30 14:46:56 compute-0 ovn_controller[154021]: 2025-09-30T14:46:56Z|00072|binding|INFO|Setting lport 9647a6b7-6ba5-4788-9075-bdfb0924041c down in Southbound
Sep 30 14:46:56 compute-0 ovn_controller[154021]: 2025-09-30T14:46:56Z|00073|binding|INFO|Removing iface tap9647a6b7-6b ovn-installed in OVS
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.234 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:35:09 10.100.0.18', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'ab354489-bdb3-49d0-9ed1-574d93130913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f96ad7c-4512-478c-acee-7360218cf3ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=093e98dd-9645-4629-9a56-b4dac70fd8d8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=9647a6b7-6ba5-4788-9075-bdfb0924041c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.238 163966 INFO neutron.agent.ovn.metadata.agent [-] Port 9647a6b7-6ba5-4788-9075-bdfb0924041c in datapath 4f96ad7c-4512-478c-acee-7360218cf3ea unbound from our chassis
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.239 163966 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f96ad7c-4512-478c-acee-7360218cf3ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.241 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf4d5bf-595d-44dd-bd13-b55198075f0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.242 163966 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea namespace which is not needed anymore
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.242 2 DEBUG nova.virt.libvirt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Received event <DeviceRemovedEvent: 1759243616.2419622, ab354489-bdb3-49d0-9ed1-574d93130913 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.247 2 DEBUG nova.virt.libvirt.driver [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Start waiting for the detach event from libvirt for device tap9647a6b7-6b with device alias net1 for instance ab354489-bdb3-49d0-9ed1-574d93130913 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.248 2 DEBUG nova.virt.libvirt.guest [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:21:35:09"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9647a6b7-6b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.251 2 DEBUG nova.virt.libvirt.guest [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:21:35:09"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9647a6b7-6b"/></interface>not found in domain: <domain type='kvm' id='3'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <name>instance-00000006</name>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <uuid>ab354489-bdb3-49d0-9ed1-574d93130913</uuid>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-711458846</nova:name>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:45:23</nova:creationTime>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:port uuid="70e1bfe9-6006-4e08-9c7f-c0d64c8269a0">
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:port uuid="9647a6b7-6ba5-4788-9075-bdfb0924041c">
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:46:56 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <memory unit='KiB'>131072</memory>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <currentMemory unit='KiB'>131072</currentMemory>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <vcpu placement='static'>1</vcpu>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <resource>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <partition>/machine</partition>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </resource>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <sysinfo type='smbios'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <system>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='manufacturer'>RDO</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='product'>OpenStack Compute</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='serial'>ab354489-bdb3-49d0-9ed1-574d93130913</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='uuid'>ab354489-bdb3-49d0-9ed1-574d93130913</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <entry name='family'>Virtual Machine</entry>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </system>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <os>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <boot dev='hd'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <smbios mode='sysinfo'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </os>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <features>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <vmcoreinfo state='on'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </features>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <cpu mode='custom' match='exact' check='full'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <vendor>AMD</vendor>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='x2apic'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc-deadline'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='hypervisor'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc_adjust'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='spec-ctrl'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='stibp'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='arch-capabilities'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='ssbd'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='cmp_legacy'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='overflow-recov'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='succor'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='ibrs'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='amd-ssbd'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='virt-ssbd'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='lbrv'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='tsc-scale'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='vmcb-clean'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='flushbyasid'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='pause-filter'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='pfthreshold'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='svme-addr-chk'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='rdctl-no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='mds-no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='gds-no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='rfds-no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='xsaves'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='svm'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='require' name='topoext'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='npt'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <feature policy='disable' name='nrip-save'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <clock offset='utc'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <timer name='pit' tickpolicy='delay'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <timer name='rtc' tickpolicy='catchup'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <timer name='hpet' present='no'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <on_poweroff>destroy</on_poweroff>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <on_reboot>restart</on_reboot>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <on_crash>destroy</on_crash>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <disk type='network' device='disk'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/ab354489-bdb3-49d0-9ed1-574d93130913_disk' index='2'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       </source>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target dev='vda' bus='virtio'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='virtio-disk0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <disk type='network' device='cdrom'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/ab354489-bdb3-49d0-9ed1-574d93130913_disk.config' index='1'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       </source>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target dev='sda' bus='sata'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <readonly/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='sata0-0-0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='0' model='pcie-root'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pcie.0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='1' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='1' port='0x10'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='2' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='2' port='0x11'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='3' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='3' port='0x12'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.3'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='4' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='4' port='0x13'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.4'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='5' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='5' port='0x14'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.5'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='6' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='6' port='0x15'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='7' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='7' port='0x16'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.7'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='8' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='8' port='0x17'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.8'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='9' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='9' port='0x18'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.9'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='10' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='10' port='0x19'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.10'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='11' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='11' port='0x1a'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.11'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='12' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='12' port='0x1b'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.12'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='13' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='13' port='0x1c'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.13'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='14' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='14' port='0x1d'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.14'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='15' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='15' port='0x1e'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.15'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='16' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='16' port='0x1f'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.16'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='17' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='17' port='0x20'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.17'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='18' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='18' port='0x21'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.18'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='19' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='19' port='0x22'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.19'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='20' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='20' port='0x23'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.20'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='21' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='21' port='0x24'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.21'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='22' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='22' port='0x25'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.22'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='23' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='23' port='0x26'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.23'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='24' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='24' port='0x27'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.24'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='25' model='pcie-root-port'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target chassis='25' port='0x28'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.25'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model name='pcie-pci-bridge'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='pci.26'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='usb' index='0' model='piix3-uhci'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='usb'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <controller type='sata' index='0'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='ide'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <interface type='ethernet'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <mac address='fa:16:3e:db:b9:ad'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target dev='tap70e1bfe9-60'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model type='virtio'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <driver name='vhost' rx_queue_size='512'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <mtu size='1442'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='net0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <serial type='pty'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/console.log' append='off'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target type='isa-serial' port='0'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:         <model name='isa-serial'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       </target>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <console type='pty' tty='/dev/pts/0'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/console.log' append='off'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <target type='serial' port='0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </console>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <input type='tablet' bus='usb'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='input0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='usb' bus='0' port='1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <input type='mouse' bus='ps2'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='input1'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <input type='keyboard' bus='ps2'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='input2'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <listen type='address' address='::0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <audio id='1' type='none'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <video>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <model type='virtio' heads='1' primary='yes'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='video0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </video>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <watchdog model='itco' action='reset'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='watchdog0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </watchdog>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <memballoon model='virtio'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <stats period='10'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='balloon0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <rng model='virtio'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <backend model='random'>/dev/urandom</backend>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <alias name='rng0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <label>system_u:system_r:svirt_t:s0:c620,c988</label>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c620,c988</imagelabel>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <label>+107:+107</label>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <imagelabel>+107:+107</imagelabel>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:46:56 compute-0 nova_compute[261524]: </domain>
Sep 30 14:46:56 compute-0 nova_compute[261524]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.251 2 INFO nova.virt.libvirt.driver [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully detached device tap9647a6b7-6b from instance ab354489-bdb3-49d0-9ed1-574d93130913 from the live domain config.
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.251 2 DEBUG nova.virt.libvirt.vif [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-711458846',display_name='tempest-TestNetworkBasicOps-server-711458846',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-711458846',id=6,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMQL53T2ZkSAoVzfinB0Xb6YV6zqFtICzdovU1Kn/PIvW0fTnkL2hml556IQQU+IFdjIRu6Xc3RQKHc2DkPb73zFKtN5c4E62Q7wZZkQI9VBc0aWDqG12KKHVj732hp6w==',key_name='tempest-TestNetworkBasicOps-1073344022',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:44:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-z3tdfpa2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:44:53Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=ab354489-bdb3-49d0-9ed1-574d93130913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.252 2 DEBUG nova.network.os_vif_util [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.252 2 DEBUG nova.network.os_vif_util [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.253 2 DEBUG os_vif [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.255 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9647a6b7-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.263 2 INFO os_vif [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b')
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.264 2 DEBUG nova.virt.libvirt.guest [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-711458846</nova:name>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:46:56</nova:creationTime>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     <nova:port uuid="70e1bfe9-6006-4e08-9c7f-c0d64c8269a0">
Sep 30 14:46:56 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 14:46:56 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:46:56 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:46:56 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:46:56 compute-0 nova_compute[261524]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Sep 30 14:46:56 compute-0 ceph-mon[74194]: pgmap v942: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Sep 30 14:46:56 compute-0 neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea[277496]: [NOTICE]   (277500) : haproxy version is 2.8.14-c23fe91
Sep 30 14:46:56 compute-0 neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea[277496]: [NOTICE]   (277500) : path to executable is /usr/sbin/haproxy
Sep 30 14:46:56 compute-0 neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea[277496]: [WARNING]  (277500) : Exiting Master process...
Sep 30 14:46:56 compute-0 neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea[277496]: [ALERT]    (277500) : Current worker (277502) exited with code 143 (Terminated)
Sep 30 14:46:56 compute-0 neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea[277496]: [WARNING]  (277500) : All workers exited. Exiting... (0)
Sep 30 14:46:56 compute-0 systemd[1]: libpod-bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b.scope: Deactivated successfully.
Sep 30 14:46:56 compute-0 podman[279444]: 2025-09-30 14:46:56.417948026 +0000 UTC m=+0.044638231 container died bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b-userdata-shm.mount: Deactivated successfully.
Sep 30 14:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-482fc40af09dd4bda9578911cfb5534a6ea27758df09ab94a5eec6c961ae2757-merged.mount: Deactivated successfully.
Sep 30 14:46:56 compute-0 podman[279444]: 2025-09-30 14:46:56.465652012 +0000 UTC m=+0.092342217 container cleanup bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250923)
Sep 30 14:46:56 compute-0 systemd[1]: libpod-conmon-bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b.scope: Deactivated successfully.
Sep 30 14:46:56 compute-0 podman[279475]: 2025-09-30 14:46:56.543571932 +0000 UTC m=+0.049360077 container remove bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.552 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed8dda7-53b4-4348-a03b-28b1e37d8cd2]: (4, ('Tue Sep 30 02:46:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea (bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b)\nbc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b\nTue Sep 30 02:46:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea (bc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b)\nbc5b8fa766ce1f8f9b8500f1eda942945f5212cd4c0cfa1b832930b8d469f60b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.555 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[b082083d-98ae-4aa4-887f-7fcb361d4983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.556 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f96ad7c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:56 compute-0 kernel: tap4f96ad7c-40: left promiscuous mode
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.566 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd2bede-e7a4-49e5-be25-df39ba613d71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:46:56 compute-0 nova_compute[261524]: 2025-09-30 14:46:56.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.601 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[6b5a4790-5288-49ee-90fd-0e10c4e681bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.603 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb9003e-0ba1-4d77-9c5f-3959d88867b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.628 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[3878adef-308c-4bfe-b4bf-00ffa497f259]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688600, 'reachable_time': 19451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279490, 'error': None, 'target': 'ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:46:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d4f96ad7c\x2d4512\x2d478c\x2dacee\x2d7360218cf3ea.mount: Deactivated successfully.
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.631 164124 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f96ad7c-4512-478c-acee-7360218cf3ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Sep 30 14:46:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:56.632 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[5dec0db0-9997-4d7d-959e-6682b9ccc6e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:46:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v943: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.9 KiB/s wr, 29 op/s
Sep 30 14:46:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:46:57.171Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:46:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:46:57 compute-0 nova_compute[261524]: 2025-09-30 14:46:57.279 2 DEBUG nova.compute.manager [req-1c8b4025-9d08-4eef-b968-2eb0aabe838c req-d42f7eca-c460-4266-972a-d64a3dc54bc0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-unplugged-9647a6b7-6ba5-4788-9075-bdfb0924041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:46:57 compute-0 nova_compute[261524]: 2025-09-30 14:46:57.280 2 DEBUG oslo_concurrency.lockutils [req-1c8b4025-9d08-4eef-b968-2eb0aabe838c req-d42f7eca-c460-4266-972a-d64a3dc54bc0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:46:57 compute-0 nova_compute[261524]: 2025-09-30 14:46:57.280 2 DEBUG oslo_concurrency.lockutils [req-1c8b4025-9d08-4eef-b968-2eb0aabe838c req-d42f7eca-c460-4266-972a-d64a3dc54bc0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:46:57 compute-0 nova_compute[261524]: 2025-09-30 14:46:57.280 2 DEBUG oslo_concurrency.lockutils [req-1c8b4025-9d08-4eef-b968-2eb0aabe838c req-d42f7eca-c460-4266-972a-d64a3dc54bc0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:46:57 compute-0 nova_compute[261524]: 2025-09-30 14:46:57.281 2 DEBUG nova.compute.manager [req-1c8b4025-9d08-4eef-b968-2eb0aabe838c req-d42f7eca-c460-4266-972a-d64a3dc54bc0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] No waiting events found dispatching network-vif-unplugged-9647a6b7-6ba5-4788-9075-bdfb0924041c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:46:57 compute-0 nova_compute[261524]: 2025-09-30 14:46:57.282 2 WARNING nova.compute.manager [req-1c8b4025-9d08-4eef-b968-2eb0aabe838c req-d42f7eca-c460-4266-972a-d64a3dc54bc0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received unexpected event network-vif-unplugged-9647a6b7-6ba5-4788-9075-bdfb0924041c for instance with vm_state active and task_state None.
Sep 30 14:46:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:57.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:46:58.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:58 compute-0 ceph-mon[74194]: pgmap v943: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.9 KiB/s wr, 29 op/s
Sep 30 14:46:58 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:46:58.413 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:46:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v944: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Sep 30 14:46:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:46:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:46:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:46:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:46:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.359 2 DEBUG nova.compute.manager [req-c06be5cd-032c-40cf-b736-6e5a2af7e22e req-895c89dc-3f5d-4034-ae81-06ecfbc8d9a4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-plugged-9647a6b7-6ba5-4788-9075-bdfb0924041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.359 2 DEBUG oslo_concurrency.lockutils [req-c06be5cd-032c-40cf-b736-6e5a2af7e22e req-895c89dc-3f5d-4034-ae81-06ecfbc8d9a4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.360 2 DEBUG oslo_concurrency.lockutils [req-c06be5cd-032c-40cf-b736-6e5a2af7e22e req-895c89dc-3f5d-4034-ae81-06ecfbc8d9a4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.360 2 DEBUG oslo_concurrency.lockutils [req-c06be5cd-032c-40cf-b736-6e5a2af7e22e req-895c89dc-3f5d-4034-ae81-06ecfbc8d9a4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.361 2 DEBUG nova.compute.manager [req-c06be5cd-032c-40cf-b736-6e5a2af7e22e req-895c89dc-3f5d-4034-ae81-06ecfbc8d9a4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] No waiting events found dispatching network-vif-plugged-9647a6b7-6ba5-4788-9075-bdfb0924041c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.361 2 WARNING nova.compute.manager [req-c06be5cd-032c-40cf-b736-6e5a2af7e22e req-895c89dc-3f5d-4034-ae81-06ecfbc8d9a4 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received unexpected event network-vif-plugged-9647a6b7-6ba5-4788-9075-bdfb0924041c for instance with vm_state active and task_state None.
Sep 30 14:46:59 compute-0 ceph-mon[74194]: pgmap v944: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Sep 30 14:46:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:46:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:46:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:46:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:46:59
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.nfs', '.mgr', 'backups']
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:46:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:46:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.816 2 DEBUG oslo_concurrency.lockutils [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.816 2 DEBUG oslo_concurrency.lockutils [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquired lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.817 2 DEBUG nova.network.neutron [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.855 2 DEBUG nova.compute.manager [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-deleted-9647a6b7-6ba5-4788-9075-bdfb0924041c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.855 2 INFO nova.compute.manager [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Neutron deleted interface 9647a6b7-6ba5-4788-9075-bdfb0924041c; detaching it from the instance and deleting it from the info cache
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.855 2 DEBUG nova.network.neutron [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.879 2 DEBUG nova.objects.instance [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lazy-loading 'system_metadata' on Instance uuid ab354489-bdb3-49d0-9ed1-574d93130913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.910 2 DEBUG nova.objects.instance [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lazy-loading 'flavor' on Instance uuid ab354489-bdb3-49d0-9ed1-574d93130913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.943 2 DEBUG nova.virt.libvirt.vif [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-711458846',display_name='tempest-TestNetworkBasicOps-server-711458846',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-711458846',id=6,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMQL53T2ZkSAoVzfinB0Xb6YV6zqFtICzdovU1Kn/PIvW0fTnkL2hml556IQQU+IFdjIRu6Xc3RQKHc2DkPb73zFKtN5c4E62Q7wZZkQI9VBc0aWDqG12KKHVj732hp6w==',key_name='tempest-TestNetworkBasicOps-1073344022',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:44:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-z3tdfpa2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:44:53Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=ab354489-bdb3-49d0-9ed1-574d93130913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, 
"qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.943 2 DEBUG nova.network.os_vif_util [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Converting VIF {"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.944 2 DEBUG nova.network.os_vif_util [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.947 2 DEBUG nova.virt.libvirt.guest [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:21:35:09"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9647a6b7-6b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.958 2 DEBUG nova.virt.libvirt.guest [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:21:35:09"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9647a6b7-6b"/></interface>not found in domain: <domain type='kvm' id='3'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <name>instance-00000006</name>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <uuid>ab354489-bdb3-49d0-9ed1-574d93130913</uuid>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-711458846</nova:name>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:46:56</nova:creationTime>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:port uuid="70e1bfe9-6006-4e08-9c7f-c0d64c8269a0">
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:46:59 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <memory unit='KiB'>131072</memory>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <currentMemory unit='KiB'>131072</currentMemory>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <vcpu placement='static'>1</vcpu>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <resource>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <partition>/machine</partition>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </resource>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <sysinfo type='smbios'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <system>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='manufacturer'>RDO</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='product'>OpenStack Compute</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='serial'>ab354489-bdb3-49d0-9ed1-574d93130913</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='uuid'>ab354489-bdb3-49d0-9ed1-574d93130913</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='family'>Virtual Machine</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </system>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <os>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <boot dev='hd'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <smbios mode='sysinfo'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </os>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <features>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <vmcoreinfo state='on'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </features>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <cpu mode='custom' match='exact' check='full'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <vendor>AMD</vendor>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='x2apic'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc-deadline'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='hypervisor'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc_adjust'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='spec-ctrl'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='stibp'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='arch-capabilities'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='ssbd'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='cmp_legacy'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='overflow-recov'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='succor'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='ibrs'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='amd-ssbd'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='virt-ssbd'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='lbrv'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='tsc-scale'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='vmcb-clean'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='flushbyasid'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='pause-filter'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='pfthreshold'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='svme-addr-chk'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='rdctl-no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='mds-no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='gds-no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='rfds-no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='xsaves'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='svm'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='topoext'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='npt'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='nrip-save'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <clock offset='utc'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <timer name='pit' tickpolicy='delay'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <timer name='rtc' tickpolicy='catchup'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <timer name='hpet' present='no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <on_poweroff>destroy</on_poweroff>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <on_reboot>restart</on_reboot>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <on_crash>destroy</on_crash>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <disk type='network' device='disk'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/ab354489-bdb3-49d0-9ed1-574d93130913_disk' index='2'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       </source>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target dev='vda' bus='virtio'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='virtio-disk0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <disk type='network' device='cdrom'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/ab354489-bdb3-49d0-9ed1-574d93130913_disk.config' index='1'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       </source>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target dev='sda' bus='sata'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <readonly/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='sata0-0-0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='0' model='pcie-root'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pcie.0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='1' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='1' port='0x10'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='2' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='2' port='0x11'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='3' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='3' port='0x12'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.3'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='4' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='4' port='0x13'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.4'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='5' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='5' port='0x14'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.5'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='6' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='6' port='0x15'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='7' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='7' port='0x16'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.7'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='8' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='8' port='0x17'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.8'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='9' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='9' port='0x18'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.9'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='10' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='10' port='0x19'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.10'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='11' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='11' port='0x1a'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.11'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='12' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='12' port='0x1b'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.12'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='13' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='13' port='0x1c'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.13'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='14' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='14' port='0x1d'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.14'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='15' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='15' port='0x1e'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.15'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='16' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='16' port='0x1f'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.16'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='17' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='17' port='0x20'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.17'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='18' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='18' port='0x21'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.18'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='19' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='19' port='0x22'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.19'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='20' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='20' port='0x23'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.20'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='21' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='21' port='0x24'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.21'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='22' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='22' port='0x25'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.22'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='23' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='23' port='0x26'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.23'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='24' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='24' port='0x27'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.24'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='25' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='25' port='0x28'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.25'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-pci-bridge'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.26'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='usb' index='0' model='piix3-uhci'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='usb'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='sata' index='0'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='ide'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <interface type='ethernet'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <mac address='fa:16:3e:db:b9:ad'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target dev='tap70e1bfe9-60'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model type='virtio'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <driver name='vhost' rx_queue_size='512'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <mtu size='1442'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='net0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <serial type='pty'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/console.log' append='off'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target type='isa-serial' port='0'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <model name='isa-serial'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       </target>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <console type='pty' tty='/dev/pts/0'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/console.log' append='off'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target type='serial' port='0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </console>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <input type='tablet' bus='usb'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='input0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='usb' bus='0' port='1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <input type='mouse' bus='ps2'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='input1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <input type='keyboard' bus='ps2'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='input2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <listen type='address' address='::0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <audio id='1' type='none'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <video>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model type='virtio' heads='1' primary='yes'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='video0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </video>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <watchdog model='itco' action='reset'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='watchdog0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </watchdog>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <memballoon model='virtio'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <stats period='10'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='balloon0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <rng model='virtio'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <backend model='random'>/dev/urandom</backend>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='rng0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <label>system_u:system_r:svirt_t:s0:c620,c988</label>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c620,c988</imagelabel>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <label>+107:+107</label>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <imagelabel>+107:+107</imagelabel>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:46:59 compute-0 nova_compute[261524]: </domain>
Sep 30 14:46:59 compute-0 nova_compute[261524]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.959 2 DEBUG nova.virt.libvirt.guest [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:21:35:09"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9647a6b7-6b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.964 2 DEBUG nova.virt.libvirt.guest [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:21:35:09"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9647a6b7-6b"/></interface>not found in domain: <domain type='kvm' id='3'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <name>instance-00000006</name>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <uuid>ab354489-bdb3-49d0-9ed1-574d93130913</uuid>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-711458846</nova:name>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:46:56</nova:creationTime>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:port uuid="70e1bfe9-6006-4e08-9c7f-c0d64c8269a0">
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:46:59 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <memory unit='KiB'>131072</memory>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <currentMemory unit='KiB'>131072</currentMemory>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <vcpu placement='static'>1</vcpu>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <resource>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <partition>/machine</partition>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </resource>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <sysinfo type='smbios'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <system>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='manufacturer'>RDO</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='product'>OpenStack Compute</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='serial'>ab354489-bdb3-49d0-9ed1-574d93130913</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='uuid'>ab354489-bdb3-49d0-9ed1-574d93130913</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <entry name='family'>Virtual Machine</entry>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </system>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <os>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <boot dev='hd'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <smbios mode='sysinfo'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </os>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <features>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <vmcoreinfo state='on'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </features>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <cpu mode='custom' match='exact' check='full'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <model fallback='forbid'>EPYC-Rome</model>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <vendor>AMD</vendor>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='x2apic'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc-deadline'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='hypervisor'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='tsc_adjust'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='spec-ctrl'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='stibp'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='arch-capabilities'/>
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='ssbd'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='cmp_legacy'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='overflow-recov'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='succor'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='ibrs'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='amd-ssbd'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='virt-ssbd'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='lbrv'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='tsc-scale'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='vmcb-clean'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='flushbyasid'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='pause-filter'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='pfthreshold'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='svme-addr-chk'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='lfence-always-serializing'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='rdctl-no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='mds-no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='pschange-mc-no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='gds-no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='rfds-no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='xsaves'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='svm'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='require' name='topoext'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='npt'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <feature policy='disable' name='nrip-save'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <clock offset='utc'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <timer name='pit' tickpolicy='delay'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <timer name='rtc' tickpolicy='catchup'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <timer name='hpet' present='no'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <on_poweroff>destroy</on_poweroff>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <on_reboot>restart</on_reboot>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <on_crash>destroy</on_crash>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <disk type='network' device='disk'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/ab354489-bdb3-49d0-9ed1-574d93130913_disk' index='2'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       </source>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target dev='vda' bus='virtio'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='virtio-disk0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <disk type='network' device='cdrom'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <driver name='qemu' type='raw' cache='none'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <auth username='openstack'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <secret type='ceph' uuid='5e3c7776-ac03-5698-b79f-a6dc2d80cae6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <source protocol='rbd' name='vms/ab354489-bdb3-49d0-9ed1-574d93130913_disk.config' index='1'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.100' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.102' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <host name='192.168.122.101' port='6789'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       </source>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target dev='sda' bus='sata'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <readonly/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='sata0-0-0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='0' model='pcie-root'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pcie.0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='1' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='1' port='0x10'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='2' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='2' port='0x11'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='3' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='3' port='0x12'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.3'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='4' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='4' port='0x13'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.4'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='5' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='5' port='0x14'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.5'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='6' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='6' port='0x15'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='7' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='7' port='0x16'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.7'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='8' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='8' port='0x17'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.8'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='9' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='9' port='0x18'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.9'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='10' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='10' port='0x19'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.10'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='11' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='11' port='0x1a'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.11'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='12' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='12' port='0x1b'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.12'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='13' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='13' port='0x1c'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.13'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='14' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='14' port='0x1d'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.14'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='15' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='15' port='0x1e'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.15'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='16' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='16' port='0x1f'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.16'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='17' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='17' port='0x20'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.17'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='18' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='18' port='0x21'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.18'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='19' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='19' port='0x22'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.19'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='20' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='20' port='0x23'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.20'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='21' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='21' port='0x24'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.21'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='22' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='22' port='0x25'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.22'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='23' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='23' port='0x26'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.23'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='24' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='24' port='0x27'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.24'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='25' model='pcie-root-port'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-root-port'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target chassis='25' port='0x28'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.25'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model name='pcie-pci-bridge'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='pci.26'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='usb' index='0' model='piix3-uhci'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='usb'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <controller type='sata' index='0'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='ide'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </controller>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <interface type='ethernet'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <mac address='fa:16:3e:db:b9:ad'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target dev='tap70e1bfe9-60'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model type='virtio'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <driver name='vhost' rx_queue_size='512'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <mtu size='1442'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='net0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <serial type='pty'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/console.log' append='off'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target type='isa-serial' port='0'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:         <model name='isa-serial'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       </target>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <console type='pty' tty='/dev/pts/0'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <source path='/dev/pts/0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <log file='/var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913/console.log' append='off'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <target type='serial' port='0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='serial0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </console>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <input type='tablet' bus='usb'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='input0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='usb' bus='0' port='1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <input type='mouse' bus='ps2'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='input1'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <input type='keyboard' bus='ps2'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='input2'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </input>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <listen type='address' address='::0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </graphics>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <audio id='1' type='none'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <video>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <model type='virtio' heads='1' primary='yes'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='video0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </video>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <watchdog model='itco' action='reset'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='watchdog0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </watchdog>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <memballoon model='virtio'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <stats period='10'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='balloon0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <rng model='virtio'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <backend model='random'>/dev/urandom</backend>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <alias name='rng0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <label>system_u:system_r:svirt_t:s0:c620,c988</label>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c620,c988</imagelabel>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <label>+107:+107</label>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <imagelabel>+107:+107</imagelabel>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </seclabel>
Sep 30 14:46:59 compute-0 nova_compute[261524]: </domain>
Sep 30 14:46:59 compute-0 nova_compute[261524]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.965 2 WARNING nova.virt.libvirt.driver [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Detaching interface fa:16:3e:21:35:09 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9647a6b7-6b' not found.
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.966 2 DEBUG nova.virt.libvirt.vif [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-711458846',display_name='tempest-TestNetworkBasicOps-server-711458846',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-711458846',id=6,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMQL53T2ZkSAoVzfinB0Xb6YV6zqFtICzdovU1Kn/PIvW0fTnkL2hml556IQQU+IFdjIRu6Xc3RQKHc2DkPb73zFKtN5c4E62Q7wZZkQI9VBc0aWDqG12KKHVj732hp6w==',key_name='tempest-TestNetworkBasicOps-1073344022',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:44:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-z3tdfpa2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:44:53Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=ab354489-bdb3-49d0-9ed1-574d93130913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.966 2 DEBUG nova.network.os_vif_util [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Converting VIF {"id": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "address": "fa:16:3e:21:35:09", "network": {"id": "4f96ad7c-4512-478c-acee-7360218cf3ea", "bridge": "br-int", "label": "tempest-network-smoke--980620503", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9647a6b7-6b", "ovs_interfaceid": "9647a6b7-6ba5-4788-9075-bdfb0924041c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.967 2 DEBUG nova.network.os_vif_util [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.967 2 DEBUG os_vif [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.969 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9647a6b7-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.969 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.972 2 INFO os_vif [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:35:09,bridge_name='br-int',has_traffic_filtering=True,id=9647a6b7-6ba5-4788-9075-bdfb0924041c,network=Network(4f96ad7c-4512-478c-acee-7360218cf3ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9647a6b7-6b')
Sep 30 14:46:59 compute-0 nova_compute[261524]: 2025-09-30 14:46:59.974 2 DEBUG nova.virt.libvirt.guest [req-36154a68-b7d6-4356-aa7c-8f3d245bdfae req-fe4fa1ab-8198-48b0-a1ff-943d6757723a e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:name>tempest-TestNetworkBasicOps-server-711458846</nova:name>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:creationTime>2025-09-30 14:46:59</nova:creationTime>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:flavor name="m1.nano">
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:memory>128</nova:memory>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:disk>1</nova:disk>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:swap>0</nova:swap>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:vcpus>1</nova:vcpus>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </nova:flavor>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:owner>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </nova:owner>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   <nova:ports>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     <nova:port uuid="70e1bfe9-6006-4e08-9c7f-c0d64c8269a0">
Sep 30 14:46:59 compute-0 nova_compute[261524]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Sep 30 14:46:59 compute-0 nova_compute[261524]:     </nova:port>
Sep 30 14:46:59 compute-0 nova_compute[261524]:   </nova:ports>
Sep 30 14:46:59 compute-0 nova_compute[261524]: </nova:instance>
Sep 30 14:46:59 compute-0 nova_compute[261524]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007639151730481214 of space, bias 1.0, pg target 0.22917455191443642 quantized to 32 (current 32)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:46:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:47:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:00.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:47:00 compute-0 nova_compute[261524]: 2025-09-30 14:47:00.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:00 compute-0 ovn_controller[154021]: 2025-09-30T14:47:00Z|00074|binding|INFO|Releasing lport 774ce5b0-5e80-4a27-9cdb-1f1629fd42f7 from this chassis (sb_readonly=0)
Sep 30 14:47:00 compute-0 nova_compute[261524]: 2025-09-30 14:47:00.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v945: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Sep 30 14:47:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:47:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:47:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:47:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:47:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.048 2 INFO nova.network.neutron [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Port 9647a6b7-6ba5-4788-9075-bdfb0924041c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.049 2 DEBUG nova.network.neutron [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.066 2 DEBUG oslo_concurrency.lockutils [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Releasing lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:47:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:47:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:47:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:47:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:47:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.090 2 DEBUG oslo_concurrency.lockutils [None req-3e6cde06-d340-4d80-bdad-76caf8288fbd 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "interface-ab354489-bdb3-49d0-9ed1-574d93130913-9647a6b7-6ba5-4788-9075-bdfb0924041c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.297 2 DEBUG oslo_concurrency.lockutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.298 2 DEBUG oslo_concurrency.lockutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.298 2 DEBUG oslo_concurrency.lockutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.299 2 DEBUG oslo_concurrency.lockutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.299 2 DEBUG oslo_concurrency.lockutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.301 2 INFO nova.compute.manager [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Terminating instance
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.302 2 DEBUG nova.compute.manager [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:47:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:01.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:47:01 compute-0 ceph-mon[74194]: pgmap v945: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Sep 30 14:47:01 compute-0 kernel: tap70e1bfe9-60 (unregistering): left promiscuous mode
Sep 30 14:47:01 compute-0 NetworkManager[45472]: <info>  [1759243621.7642] device (tap70e1bfe9-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 14:47:01 compute-0 ovn_controller[154021]: 2025-09-30T14:47:01Z|00075|binding|INFO|Releasing lport 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 from this chassis (sb_readonly=0)
Sep 30 14:47:01 compute-0 ovn_controller[154021]: 2025-09-30T14:47:01Z|00076|binding|INFO|Setting lport 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 down in Southbound
Sep 30 14:47:01 compute-0 ovn_controller[154021]: 2025-09-30T14:47:01Z|00077|binding|INFO|Removing iface tap70e1bfe9-60 ovn-installed in OVS
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:01 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:01.779 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:b9:ad 10.100.0.14'], port_security=['fa:16:3e:db:b9:ad 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ab354489-bdb3-49d0-9ed1-574d93130913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-653945fb-0a1b-4a3b-b45f-4bafe62f765f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a439bb63-9919-40fb-8adf-828076e3652c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f85ab132-9b06-4fe7-bf67-10b54f3571f8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=70e1bfe9-6006-4e08-9c7f-c0d64c8269a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:47:01 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:01.782 163966 INFO neutron.agent.ovn.metadata.agent [-] Port 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 in datapath 653945fb-0a1b-4a3b-b45f-4bafe62f765f unbound from our chassis
Sep 30 14:47:01 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:01.783 163966 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 653945fb-0a1b-4a3b-b45f-4bafe62f765f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Sep 30 14:47:01 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:01.784 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d1d2b1-892d-455c-aea7-6fce66697b84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:47:01 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:01.785 163966 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f namespace which is not needed anymore
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:01 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Sep 30 14:47:01 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 20.222s CPU time.
Sep 30 14:47:01 compute-0 systemd-machined[215710]: Machine qemu-3-instance-00000006 terminated.
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.940 2 INFO nova.virt.libvirt.driver [-] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Instance destroyed successfully.
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.940 2 DEBUG nova.objects.instance [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'resources' on Instance uuid ab354489-bdb3-49d0-9ed1-574d93130913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.950 2 DEBUG nova.compute.manager [req-e93b84b5-eff3-4396-8244-81e0d8109abb req-1c736ef3-cdd4-4407-8fc6-5972860b2bf0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-changed-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.951 2 DEBUG nova.compute.manager [req-e93b84b5-eff3-4396-8244-81e0d8109abb req-1c736ef3-cdd4-4407-8fc6-5972860b2bf0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Refreshing instance network info cache due to event network-changed-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.951 2 DEBUG oslo_concurrency.lockutils [req-e93b84b5-eff3-4396-8244-81e0d8109abb req-1c736ef3-cdd4-4407-8fc6-5972860b2bf0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.952 2 DEBUG oslo_concurrency.lockutils [req-e93b84b5-eff3-4396-8244-81e0d8109abb req-1c736ef3-cdd4-4407-8fc6-5972860b2bf0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.952 2 DEBUG nova.network.neutron [req-e93b84b5-eff3-4396-8244-81e0d8109abb req-1c736ef3-cdd4-4407-8fc6-5972860b2bf0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Refreshing network info cache for port 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.955 2 DEBUG nova.virt.libvirt.vif [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-711458846',display_name='tempest-TestNetworkBasicOps-server-711458846',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-711458846',id=6,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMQL53T2ZkSAoVzfinB0Xb6YV6zqFtICzdovU1Kn/PIvW0fTnkL2hml556IQQU+IFdjIRu6Xc3RQKHc2DkPb73zFKtN5c4E62Q7wZZkQI9VBc0aWDqG12KKHVj732hp6w==',key_name='tempest-TestNetworkBasicOps-1073344022',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:44:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-z3tdfpa2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:44:53Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=ab354489-bdb3-49d0-9ed1-574d93130913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.955 2 DEBUG nova.network.os_vif_util [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.956 2 DEBUG nova.network.os_vif_util [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:b9:ad,bridge_name='br-int',has_traffic_filtering=True,id=70e1bfe9-6006-4e08-9c7f-c0d64c8269a0,network=Network(653945fb-0a1b-4a3b-b45f-4bafe62f765f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e1bfe9-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.956 2 DEBUG os_vif [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:b9:ad,bridge_name='br-int',has_traffic_filtering=True,id=70e1bfe9-6006-4e08-9c7f-c0d64c8269a0,network=Network(653945fb-0a1b-4a3b-b45f-4bafe62f765f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e1bfe9-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.958 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70e1bfe9-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.965 2 INFO os_vif [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:b9:ad,bridge_name='br-int',has_traffic_filtering=True,id=70e1bfe9-6006-4e08-9c7f-c0d64c8269a0,network=Network(653945fb-0a1b-4a3b-b45f-4bafe62f765f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70e1bfe9-60')
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.988 2 DEBUG nova.compute.manager [req-a2510641-51c5-49eb-a734-c90b190d7d57 req-f8c51bf6-61f5-464c-ad0f-5257a7a801b3 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-unplugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.989 2 DEBUG oslo_concurrency.lockutils [req-a2510641-51c5-49eb-a734-c90b190d7d57 req-f8c51bf6-61f5-464c-ad0f-5257a7a801b3 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.989 2 DEBUG oslo_concurrency.lockutils [req-a2510641-51c5-49eb-a734-c90b190d7d57 req-f8c51bf6-61f5-464c-ad0f-5257a7a801b3 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.989 2 DEBUG oslo_concurrency.lockutils [req-a2510641-51c5-49eb-a734-c90b190d7d57 req-f8c51bf6-61f5-464c-ad0f-5257a7a801b3 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.990 2 DEBUG nova.compute.manager [req-a2510641-51c5-49eb-a734-c90b190d7d57 req-f8c51bf6-61f5-464c-ad0f-5257a7a801b3 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] No waiting events found dispatching network-vif-unplugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:47:01 compute-0 nova_compute[261524]: 2025-09-30 14:47:01.990 2 DEBUG nova.compute.manager [req-a2510641-51c5-49eb-a734-c90b190d7d57 req-f8c51bf6-61f5-464c-ad0f-5257a7a801b3 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-unplugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Sep 30 14:47:02 compute-0 neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f[277207]: [NOTICE]   (277211) : haproxy version is 2.8.14-c23fe91
Sep 30 14:47:02 compute-0 neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f[277207]: [NOTICE]   (277211) : path to executable is /usr/sbin/haproxy
Sep 30 14:47:02 compute-0 neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f[277207]: [WARNING]  (277211) : Exiting Master process...
Sep 30 14:47:02 compute-0 neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f[277207]: [WARNING]  (277211) : Exiting Master process...
Sep 30 14:47:02 compute-0 neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f[277207]: [ALERT]    (277211) : Current worker (277213) exited with code 143 (Terminated)
Sep 30 14:47:02 compute-0 neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f[277207]: [WARNING]  (277211) : All workers exited. Exiting... (0)
Sep 30 14:47:02 compute-0 systemd[1]: libpod-8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1.scope: Deactivated successfully.
Sep 30 14:47:02 compute-0 podman[279522]: 2025-09-30 14:47:02.087748927 +0000 UTC m=+0.211590526 container died 8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:47:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:47:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:02.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:47:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1-userdata-shm.mount: Deactivated successfully.
Sep 30 14:47:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b98b075e583dcc583f15e9ea5f42d76d9d59597714d4f45cba2b1331fbb896-merged.mount: Deactivated successfully.
Sep 30 14:47:02 compute-0 podman[279522]: 2025-09-30 14:47:02.53463536 +0000 UTC m=+0.658476959 container cleanup 8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:47:02 compute-0 podman[279581]: 2025-09-30 14:47:02.538750822 +0000 UTC m=+0.220109026 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:47:02 compute-0 podman[279580]: 2025-09-30 14:47:02.541815547 +0000 UTC m=+0.226377060 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Sep 30 14:47:02 compute-0 systemd[1]: libpod-conmon-8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1.scope: Deactivated successfully.
Sep 30 14:47:02 compute-0 podman[279582]: 2025-09-30 14:47:02.544222977 +0000 UTC m=+0.213817751 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Sep 30 14:47:02 compute-0 podman[279579]: 2025-09-30 14:47:02.544996146 +0000 UTC m=+0.219031389 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Sep 30 14:47:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v946: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Sep 30 14:47:03 compute-0 podman[279664]: 2025-09-30 14:47:03.140782148 +0000 UTC m=+0.573710759 container remove 8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Sep 30 14:47:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:03.151 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4daf08-254d-46fa-ba1e-9e847534c0ed]: (4, ('Tue Sep 30 02:47:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f (8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1)\n8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1\nTue Sep 30 02:47:02 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f (8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1)\n8ec4d148dbca58bfb8df57321046b18ccbe18e95ac2018a6b9ec3d4800b4b8a1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:47:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:03.153 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[c01b127d-27c2-43be-ba76-a957739ecbe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:47:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:03.154 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap653945fb-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:03 compute-0 kernel: tap653945fb-00: left promiscuous mode
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:03.176 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[166011c4-e002-41ad-898e-e7678daf03f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:47:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:03.206 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a93a0d-cde8-4a29-ac9a-599c92e76b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:47:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:03.207 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[74d8bec3-a9d8-4e48-afc1-4e2b653ef81a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:47:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:03.229 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[16ceacd2-a51a-42b0-b5be-393a265df196]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685473, 'reachable_time': 18488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279678, 'error': None, 'target': 'ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:47:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d653945fb\x2d0a1b\x2d4a3b\x2db45f\x2d4bafe62f765f.mount: Deactivated successfully.
Sep 30 14:47:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:03.231 164124 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-653945fb-0a1b-4a3b-b45f-4bafe62f765f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Sep 30 14:47:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:03.231 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[0a665f52-905d-48ca-8ed4-fc0be380a0ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.359 2 DEBUG nova.network.neutron [req-e93b84b5-eff3-4396-8244-81e0d8109abb req-1c736ef3-cdd4-4407-8fc6-5972860b2bf0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updated VIF entry in instance network info cache for port 70e1bfe9-6006-4e08-9c7f-c0d64c8269a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.360 2 DEBUG nova.network.neutron [req-e93b84b5-eff3-4396-8244-81e0d8109abb req-1c736ef3-cdd4-4407-8fc6-5972860b2bf0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [{"id": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "address": "fa:16:3e:db:b9:ad", "network": {"id": "653945fb-0a1b-4a3b-b45f-4bafe62f765f", "bridge": "br-int", "label": "tempest-network-smoke--969342711", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70e1bfe9-60", "ovs_interfaceid": "70e1bfe9-6006-4e08-9c7f-c0d64c8269a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.385 2 DEBUG oslo_concurrency.lockutils [req-e93b84b5-eff3-4396-8244-81e0d8109abb req-1c736ef3-cdd4-4407-8fc6-5972860b2bf0 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-ab354489-bdb3-49d0-9ed1-574d93130913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:47:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:03.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:03.671Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:47:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:03.672Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.724 2 INFO nova.virt.libvirt.driver [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Deleting instance files /var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913_del
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.726 2 INFO nova.virt.libvirt.driver [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Deletion of /var/lib/nova/instances/ab354489-bdb3-49d0-9ed1-574d93130913_del complete
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.779 2 INFO nova.compute.manager [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Took 2.48 seconds to destroy the instance on the hypervisor.
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.780 2 DEBUG oslo.service.loopingcall [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.780 2 DEBUG nova.compute.manager [-] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Sep 30 14:47:03 compute-0 nova_compute[261524]: 2025-09-30 14:47:03.781 2 DEBUG nova.network.neutron [-] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Sep 30 14:47:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.053 2 DEBUG nova.compute.manager [req-2d30aa0d-bb1d-4d89-ad17-2dac54c6b2e2 req-72f8be16-8c27-4676-95d6-1c671817f4c5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-plugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.054 2 DEBUG oslo_concurrency.lockutils [req-2d30aa0d-bb1d-4d89-ad17-2dac54c6b2e2 req-72f8be16-8c27-4676-95d6-1c671817f4c5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.054 2 DEBUG oslo_concurrency.lockutils [req-2d30aa0d-bb1d-4d89-ad17-2dac54c6b2e2 req-72f8be16-8c27-4676-95d6-1c671817f4c5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.055 2 DEBUG oslo_concurrency.lockutils [req-2d30aa0d-bb1d-4d89-ad17-2dac54c6b2e2 req-72f8be16-8c27-4676-95d6-1c671817f4c5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.055 2 DEBUG nova.compute.manager [req-2d30aa0d-bb1d-4d89-ad17-2dac54c6b2e2 req-72f8be16-8c27-4676-95d6-1c671817f4c5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] No waiting events found dispatching network-vif-plugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.056 2 WARNING nova.compute.manager [req-2d30aa0d-bb1d-4d89-ad17-2dac54c6b2e2 req-72f8be16-8c27-4676-95d6-1c671817f4c5 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received unexpected event network-vif-plugged-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 for instance with vm_state active and task_state deleting.
Sep 30 14:47:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:04.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:04 compute-0 ceph-mon[74194]: pgmap v946: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.579 2 DEBUG nova.network.neutron [-] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.603 2 INFO nova.compute.manager [-] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Took 0.82 seconds to deallocate network for instance.
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.651 2 DEBUG oslo_concurrency.lockutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.652 2 DEBUG oslo_concurrency.lockutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.665 2 DEBUG nova.compute.manager [req-d63cff90-6ee5-4be2-b83d-addd8434716a req-9af77508-ad8f-4304-a551-b464d6dcd54f e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Received event network-vif-deleted-70e1bfe9-6006-4e08-9c7f-c0d64c8269a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:47:04 compute-0 nova_compute[261524]: 2025-09-30 14:47:04.700 2 DEBUG oslo_concurrency.processutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:47:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:04] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Sep 30 14:47:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:04] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Sep 30 14:47:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v947: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 1.1 KiB/s wr, 0 op/s
Sep 30 14:47:05 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:47:05 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241171405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:05 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2241171405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:05 compute-0 nova_compute[261524]: 2025-09-30 14:47:05.171 2 DEBUG oslo_concurrency.processutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:47:05 compute-0 nova_compute[261524]: 2025-09-30 14:47:05.179 2 DEBUG nova.compute.provider_tree [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:47:05 compute-0 nova_compute[261524]: 2025-09-30 14:47:05.196 2 DEBUG nova.scheduler.client.report [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:47:05 compute-0 nova_compute[261524]: 2025-09-30 14:47:05.214 2 DEBUG oslo_concurrency.lockutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:05 compute-0 nova_compute[261524]: 2025-09-30 14:47:05.237 2 INFO nova.scheduler.client.report [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Deleted allocations for instance ab354489-bdb3-49d0-9ed1-574d93130913
Sep 30 14:47:05 compute-0 nova_compute[261524]: 2025-09-30 14:47:05.295 2 DEBUG oslo_concurrency.lockutils [None req-17adb54f-1091-4a24-a19e-e428899aecf5 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "ab354489-bdb3-49d0-9ed1-574d93130913" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:47:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:05.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:47:05 compute-0 sudo[279705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:47:05 compute-0 sudo[279705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:05 compute-0 sudo[279705]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:05 compute-0 nova_compute[261524]: 2025-09-30 14:47:05.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:06.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:06 compute-0 ceph-mon[74194]: pgmap v947: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 1.1 KiB/s wr, 0 op/s
Sep 30 14:47:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Sep 30 14:47:06 compute-0 nova_compute[261524]: 2025-09-30 14:47:06.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:07.171Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:47:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:07.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:08.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:08 compute-0 ceph-mon[74194]: pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Sep 30 14:47:08 compute-0 nova_compute[261524]: 2025-09-30 14:47:08.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:08 compute-0 nova_compute[261524]: 2025-09-30 14:47:08.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:47:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:09.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:10.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:10 compute-0 ceph-mon[74194]: pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:47:10 compute-0 nova_compute[261524]: 2025-09-30 14:47:10.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:47:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/619061578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:47:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/619061578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:47:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:11.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:11 compute-0 nova_compute[261524]: 2025-09-30 14:47:11.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:47:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:12.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:47:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:12 compute-0 ceph-mon[74194]: pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:47:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:47:13 compute-0 ceph-mon[74194]: pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:47:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:13.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:13.673Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:47:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:14.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:47:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:47:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:47:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:14] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:47:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:14] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:47:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:47:15 compute-0 nova_compute[261524]: 2025-09-30 14:47:15.395 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:47:15 compute-0 nova_compute[261524]: 2025-09-30 14:47:15.395 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:47:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:15.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:15 compute-0 nova_compute[261524]: 2025-09-30 14:47:15.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:15 compute-0 ceph-mon[74194]: pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:47:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3206672362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:15 compute-0 nova_compute[261524]: 2025-09-30 14:47:15.948 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:47:15 compute-0 nova_compute[261524]: 2025-09-30 14:47:15.951 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:47:15 compute-0 nova_compute[261524]: 2025-09-30 14:47:15.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:47:15 compute-0 nova_compute[261524]: 2025-09-30 14:47:15.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:47:15 compute-0 nova_compute[261524]: 2025-09-30 14:47:15.965 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:47:15 compute-0 nova_compute[261524]: 2025-09-30 14:47:15.966 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:47:15 compute-0 nova_compute[261524]: 2025-09-30 14:47:15.967 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:47:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:16.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/148167092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2896075325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.939 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759243621.938322, ab354489-bdb3-49d0-9ed1-574d93130913 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.940 2 INFO nova.compute.manager [-] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] VM Stopped (Lifecycle Event)
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.951 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.959 2 DEBUG nova.compute.manager [None req-b8719317-35cf-41eb-a8b2-e65e5c2abf63 - - - - - -] [instance: ab354489-bdb3-49d0-9ed1-574d93130913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.972 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.972 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.972 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.973 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.973 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:47:16 compute-0 nova_compute[261524]: 2025-09-30 14:47:16.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:17.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:47:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:17.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:47:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2855505784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:17 compute-0 nova_compute[261524]: 2025-09-30 14:47:17.462 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:47:17 compute-0 nova_compute[261524]: 2025-09-30 14:47:17.674 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:47:17 compute-0 nova_compute[261524]: 2025-09-30 14:47:17.675 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4588MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:47:17 compute-0 nova_compute[261524]: 2025-09-30 14:47:17.675 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:17 compute-0 nova_compute[261524]: 2025-09-30 14:47:17.675 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:17 compute-0 nova_compute[261524]: 2025-09-30 14:47:17.750 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:47:17 compute-0 nova_compute[261524]: 2025-09-30 14:47:17.751 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:47:17 compute-0 ceph-mon[74194]: pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:47:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2269119367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2855505784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:17 compute-0 nova_compute[261524]: 2025-09-30 14:47:17.804 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:47:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:47:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:47:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:47:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3581658387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:18 compute-0 nova_compute[261524]: 2025-09-30 14:47:18.299 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:47:18 compute-0 nova_compute[261524]: 2025-09-30 14:47:18.305 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:47:18 compute-0 nova_compute[261524]: 2025-09-30 14:47:18.329 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:47:18 compute-0 nova_compute[261524]: 2025-09-30 14:47:18.348 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:47:18 compute-0 nova_compute[261524]: 2025-09-30 14:47:18.349 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3581658387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:19 compute-0 nova_compute[261524]: 2025-09-30 14:47:19.350 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:47:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:19.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:19 compute-0 ceph-mon[74194]: pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:20 compute-0 nova_compute[261524]: 2025-09-30 14:47:20.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:47:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:21.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:47:21 compute-0 ceph-mon[74194]: pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:22 compute-0 nova_compute[261524]: 2025-09-30 14:47:22.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:47:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:22.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:47:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:47:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:23.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:23.674Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:47:23 compute-0 ceph-mon[74194]: pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:47:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:24.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:24] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:47:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:24] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:47:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:25.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:25 compute-0 nova_compute[261524]: 2025-09-30 14:47:25.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:25 compute-0 sudo[279796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:47:25 compute-0 sudo[279796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:25 compute-0 sudo[279796]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:25 compute-0 ceph-mon[74194]: pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:26.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:47:27 compute-0 nova_compute[261524]: 2025-09-30 14:47:27.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:27.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:47:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:47:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:27.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:47:27 compute-0 ceph-mon[74194]: pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:47:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:28.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:29.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:47:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:47:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:47:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:47:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:47:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:47:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:47:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:47:29 compute-0 ceph-mon[74194]: pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:47:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:30.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:30 compute-0 nova_compute[261524]: 2025-09-30 14:47:30.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.003000074s ======
Sep 30 14:47:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:31.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000074s
Sep 30 14:47:31 compute-0 ceph-mon[74194]: pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:32 compute-0 nova_compute[261524]: 2025-09-30 14:47:32.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:32.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:47:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1567438477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:33 compute-0 podman[279829]: 2025-09-30 14:47:33.225047595 +0000 UTC m=+0.129232956 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Sep 30 14:47:33 compute-0 podman[279828]: 2025-09-30 14:47:33.225967297 +0000 UTC m=+0.131194904 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Sep 30 14:47:33 compute-0 podman[279831]: 2025-09-30 14:47:33.225068165 +0000 UTC m=+0.115944278 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:47:33 compute-0 podman[279830]: 2025-09-30 14:47:33.238715941 +0000 UTC m=+0.134359742 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 14:47:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:47:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:33.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:47:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:33.675Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:47:33 compute-0 ceph-mon[74194]: pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:47:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:34.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:34] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Sep 30 14:47:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:34] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Sep 30 14:47:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:47:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:35.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:47:35 compute-0 nova_compute[261524]: 2025-09-30 14:47:35.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:36 compute-0 ceph-mon[74194]: pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:47:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:36.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v963: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:47:37 compute-0 nova_compute[261524]: 2025-09-30 14:47:37.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2229715555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:47:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:37.174Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:47:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:47:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:37.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:47:38 compute-0 ceph-mon[74194]: pgmap v963: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:47:38 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/8796965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:47:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:38.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:38.263 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:38.263 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:38.264 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v964: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:47:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:39.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:40 compute-0 ceph-mon[74194]: pgmap v964: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:47:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:47:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:40.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:47:40 compute-0 nova_compute[261524]: 2025-09-30 14:47:40.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v965: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:47:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:41.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:42 compute-0 nova_compute[261524]: 2025-09-30 14:47:42.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:42 compute-0 ceph-mon[74194]: pgmap v965: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:47:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:42.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v966: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Sep 30 14:47:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:43.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:43.676Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:47:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:44 compute-0 ceph-mon[74194]: pgmap v966: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Sep 30 14:47:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:44.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:47:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:47:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:44] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Sep 30 14:47:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:44] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Sep 30 14:47:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v967: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Sep 30 14:47:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:47:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:45.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:45 compute-0 nova_compute[261524]: 2025-09-30 14:47:45.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:45 compute-0 sudo[279920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:47:45 compute-0 sudo[279920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:45 compute-0 sudo[279920]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:47:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:46.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:47:46 compute-0 ceph-mon[74194]: pgmap v967: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Sep 30 14:47:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Sep 30 14:47:47 compute-0 nova_compute[261524]: 2025-09-30 14:47:47.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:47.174Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:47:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:47.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:47:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:47.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:47 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/129960426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:48.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:48 compute-0 ceph-mon[74194]: pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Sep 30 14:47:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:47:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:49.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:49 compute-0 ceph-mon[74194]: pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:47:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:47:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:50.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:47:50 compute-0 nova_compute[261524]: 2025-09-30 14:47:50.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:47:51 compute-0 ovn_controller[154021]: 2025-09-30T14:47:51Z|00078|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Sep 30 14:47:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:47:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:51.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:47:51 compute-0 ceph-mon[74194]: pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:47:52 compute-0 nova_compute[261524]: 2025-09-30 14:47:52.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:52.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:47:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:53.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:53.677Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:47:53 compute-0 sudo[279953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:47:53 compute-0 sudo[279953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:53 compute-0 nova_compute[261524]: 2025-09-30 14:47:53.835 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:53 compute-0 nova_compute[261524]: 2025-09-30 14:47:53.835 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:53 compute-0 sudo[279953]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:53 compute-0 nova_compute[261524]: 2025-09-30 14:47:53.850 2 DEBUG nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Sep 30 14:47:53 compute-0 sudo[279979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:47:53 compute-0 sudo[279979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:53 compute-0 ceph-mon[74194]: pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:47:53 compute-0 nova_compute[261524]: 2025-09-30 14:47:53.923 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:53 compute-0 nova_compute[261524]: 2025-09-30 14:47:53.923 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:53 compute-0 nova_compute[261524]: 2025-09-30 14:47:53.929 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Sep 30 14:47:53 compute-0 nova_compute[261524]: 2025-09-30 14:47:53.930 2 INFO nova.compute.claims [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Claim successful on node compute-0.ctlplane.example.com
Sep 30 14:47:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.034 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:47:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:47:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:54.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:47:54 compute-0 sudo[279979]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:47:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/588764621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.509 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.519 2 DEBUG nova.compute.provider_tree [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.541 2 DEBUG nova.scheduler.client.report [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.569 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.570 2 DEBUG nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.614 2 DEBUG nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.615 2 DEBUG nova.network.neutron [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.637 2 INFO nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Sep 30 14:47:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:47:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:47:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:47:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:47:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:47:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.668 2 DEBUG nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Sep 30 14:47:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:47:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:47:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:47:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:47:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:47:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:47:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:47:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:47:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:54] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Sep 30 14:47:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:47:54] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Sep 30 14:47:54 compute-0 sudo[280058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:47:54 compute-0 sudo[280058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:54 compute-0 sudo[280058]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.880 2 DEBUG nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.881 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.882 2 INFO nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Creating image(s)
Sep 30 14:47:54 compute-0 sudo[280083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:47:54 compute-0 sudo[280083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.911 2 DEBUG nova.storage.rbd_utils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:47:54 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/588764621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:47:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:47:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:47:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:47:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:47:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:47:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:47:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.952 2 DEBUG nova.storage.rbd_utils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.988 2 DEBUG nova.storage.rbd_utils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:47:54 compute-0 nova_compute[261524]: 2025-09-30 14:47:54.993 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.060 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.061 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.062 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.062 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "5be88f2030ae3f90b4568c2fe3300967dbe88639" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.099 2 DEBUG nova.storage.rbd_utils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.104 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.427 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5be88f2030ae3f90b4568c2fe3300967dbe88639 c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:47:55 compute-0 podman[280243]: 2025-09-30 14:47:55.436164463 +0000 UTC m=+0.073153144 container create e4695f3fc619140f68ba6d8e9bb2f88b55df66612cb180ee56af1edb009df38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:47:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:55 compute-0 systemd[1]: Started libpod-conmon-e4695f3fc619140f68ba6d8e9bb2f88b55df66612cb180ee56af1edb009df38c.scope.
Sep 30 14:47:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:55.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:55 compute-0 podman[280243]: 2025-09-30 14:47:55.40480661 +0000 UTC m=+0.041795371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:47:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.523 2 DEBUG nova.storage.rbd_utils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] resizing rbd image c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Sep 30 14:47:55 compute-0 podman[280243]: 2025-09-30 14:47:55.532601039 +0000 UTC m=+0.169589730 container init e4695f3fc619140f68ba6d8e9bb2f88b55df66612cb180ee56af1edb009df38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamarr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:47:55 compute-0 podman[280243]: 2025-09-30 14:47:55.54114085 +0000 UTC m=+0.178129561 container start e4695f3fc619140f68ba6d8e9bb2f88b55df66612cb180ee56af1edb009df38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamarr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:47:55 compute-0 podman[280243]: 2025-09-30 14:47:55.546536513 +0000 UTC m=+0.183525214 container attach e4695f3fc619140f68ba6d8e9bb2f88b55df66612cb180ee56af1edb009df38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:47:55 compute-0 ecstatic_lamarr[280278]: 167 167
Sep 30 14:47:55 compute-0 systemd[1]: libpod-e4695f3fc619140f68ba6d8e9bb2f88b55df66612cb180ee56af1edb009df38c.scope: Deactivated successfully.
Sep 30 14:47:55 compute-0 podman[280243]: 2025-09-30 14:47:55.549830384 +0000 UTC m=+0.186819065 container died e4695f3fc619140f68ba6d8e9bb2f88b55df66612cb180ee56af1edb009df38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamarr, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Sep 30 14:47:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-187c572c40e8d4fd7f119b85c60c9f3f649d7adb1124db63d06e987c1c57e606-merged.mount: Deactivated successfully.
Sep 30 14:47:55 compute-0 podman[280243]: 2025-09-30 14:47:55.592057215 +0000 UTC m=+0.229045896 container remove e4695f3fc619140f68ba6d8e9bb2f88b55df66612cb180ee56af1edb009df38c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:55 compute-0 systemd[1]: libpod-conmon-e4695f3fc619140f68ba6d8e9bb2f88b55df66612cb180ee56af1edb009df38c.scope: Deactivated successfully.
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.673 2 DEBUG nova.objects.instance [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'migration_context' on Instance uuid c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.687 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.688 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Ensure instance console log exists: /var/lib/nova/instances/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.688 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.689 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.689 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:47:55 compute-0 podman[280356]: 2025-09-30 14:47:55.77972742 +0000 UTC m=+0.048573508 container create 3eff6b701f11d796ac702e45d69f4cffa3a9d6f6008f032ad412fc3ea27ec093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Sep 30 14:47:55 compute-0 systemd[1]: Started libpod-conmon-3eff6b701f11d796ac702e45d69f4cffa3a9d6f6008f032ad412fc3ea27ec093.scope.
Sep 30 14:47:55 compute-0 nova_compute[261524]: 2025-09-30 14:47:55.834 2 DEBUG nova.policy [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59c80c4f189d4667aec64b43afc69ed2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Sep 30 14:47:55 compute-0 podman[280356]: 2025-09-30 14:47:55.760336602 +0000 UTC m=+0.029182720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:47:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937461556b75cf7a6bc9029c08505ea6a50319db4e8fdb5f224a30813569bc5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937461556b75cf7a6bc9029c08505ea6a50319db4e8fdb5f224a30813569bc5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937461556b75cf7a6bc9029c08505ea6a50319db4e8fdb5f224a30813569bc5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937461556b75cf7a6bc9029c08505ea6a50319db4e8fdb5f224a30813569bc5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937461556b75cf7a6bc9029c08505ea6a50319db4e8fdb5f224a30813569bc5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:55 compute-0 podman[280356]: 2025-09-30 14:47:55.883528718 +0000 UTC m=+0.152375126 container init 3eff6b701f11d796ac702e45d69f4cffa3a9d6f6008f032ad412fc3ea27ec093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jones, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:47:55 compute-0 podman[280356]: 2025-09-30 14:47:55.892014127 +0000 UTC m=+0.160860215 container start 3eff6b701f11d796ac702e45d69f4cffa3a9d6f6008f032ad412fc3ea27ec093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:47:55 compute-0 podman[280356]: 2025-09-30 14:47:55.895479163 +0000 UTC m=+0.164325281 container attach 3eff6b701f11d796ac702e45d69f4cffa3a9d6f6008f032ad412fc3ea27ec093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jones, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:47:55 compute-0 ceph-mon[74194]: pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Sep 30 14:47:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:47:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:56.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:47:56 compute-0 gifted_jones[280373]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:47:56 compute-0 gifted_jones[280373]: --> All data devices are unavailable
Sep 30 14:47:56 compute-0 systemd[1]: libpod-3eff6b701f11d796ac702e45d69f4cffa3a9d6f6008f032ad412fc3ea27ec093.scope: Deactivated successfully.
Sep 30 14:47:56 compute-0 podman[280356]: 2025-09-30 14:47:56.269601173 +0000 UTC m=+0.538447291 container died 3eff6b701f11d796ac702e45d69f4cffa3a9d6f6008f032ad412fc3ea27ec093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:47:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-937461556b75cf7a6bc9029c08505ea6a50319db4e8fdb5f224a30813569bc5e-merged.mount: Deactivated successfully.
Sep 30 14:47:56 compute-0 podman[280356]: 2025-09-30 14:47:56.321555544 +0000 UTC m=+0.590401632 container remove 3eff6b701f11d796ac702e45d69f4cffa3a9d6f6008f032ad412fc3ea27ec093 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jones, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:47:56 compute-0 systemd[1]: libpod-conmon-3eff6b701f11d796ac702e45d69f4cffa3a9d6f6008f032ad412fc3ea27ec093.scope: Deactivated successfully.
Sep 30 14:47:56 compute-0 sudo[280083]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:56.406 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:47:56 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:47:56.407 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:47:56 compute-0 sudo[280402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:47:56 compute-0 sudo[280402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:56 compute-0 sudo[280402]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:56 compute-0 nova_compute[261524]: 2025-09-30 14:47:56.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:56 compute-0 sudo[280427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:47:56 compute-0 sudo[280427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v973: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Sep 30 14:47:56 compute-0 podman[280495]: 2025-09-30 14:47:56.957557338 +0000 UTC m=+0.038681334 container create 18ab738e96780614c2e039c7fd6614f48ebc0e130bb602df9511f2a0c8e5791b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 14:47:57 compute-0 systemd[1]: Started libpod-conmon-18ab738e96780614c2e039c7fd6614f48ebc0e130bb602df9511f2a0c8e5791b.scope.
Sep 30 14:47:57 compute-0 nova_compute[261524]: 2025-09-30 14:47:57.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:47:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:47:57 compute-0 podman[280495]: 2025-09-30 14:47:56.939260907 +0000 UTC m=+0.020384923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:47:57 compute-0 podman[280495]: 2025-09-30 14:47:57.050783206 +0000 UTC m=+0.131907282 container init 18ab738e96780614c2e039c7fd6614f48ebc0e130bb602df9511f2a0c8e5791b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:47:57 compute-0 podman[280495]: 2025-09-30 14:47:57.06434716 +0000 UTC m=+0.145471156 container start 18ab738e96780614c2e039c7fd6614f48ebc0e130bb602df9511f2a0c8e5791b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:47:57 compute-0 podman[280495]: 2025-09-30 14:47:57.068148484 +0000 UTC m=+0.149272530 container attach 18ab738e96780614c2e039c7fd6614f48ebc0e130bb602df9511f2a0c8e5791b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:47:57 compute-0 vigorous_ellis[280511]: 167 167
Sep 30 14:47:57 compute-0 systemd[1]: libpod-18ab738e96780614c2e039c7fd6614f48ebc0e130bb602df9511f2a0c8e5791b.scope: Deactivated successfully.
Sep 30 14:47:57 compute-0 podman[280495]: 2025-09-30 14:47:57.069733403 +0000 UTC m=+0.150857409 container died 18ab738e96780614c2e039c7fd6614f48ebc0e130bb602df9511f2a0c8e5791b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 14:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b6348d0c9460e59624354369cd8b91dea46291836264b3e1383ad7fc2fb84bd-merged.mount: Deactivated successfully.
Sep 30 14:47:57 compute-0 podman[280495]: 2025-09-30 14:47:57.109548304 +0000 UTC m=+0.190672300 container remove 18ab738e96780614c2e039c7fd6614f48ebc0e130bb602df9511f2a0c8e5791b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:47:57 compute-0 systemd[1]: libpod-conmon-18ab738e96780614c2e039c7fd6614f48ebc0e130bb602df9511f2a0c8e5791b.scope: Deactivated successfully.
Sep 30 14:47:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:47:57.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
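The Alertmanager error above shows the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out. A reachability sketch for one of those endpoints, using only the URL taken from the log line; the 5-second timeout is an assumption standing in for Alertmanager's own notify deadline:

    import urllib.error
    import urllib.request

    # Probe the dashboard receiver that Alertmanager could not reach.
    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        resp = urllib.request.urlopen(url, timeout=5)
        print("reachable, HTTP", resp.status)
    except urllib.error.HTTPError as exc:
        print("reachable, HTTP", exc.code)        # endpoint answered, even if not 2xx
    except (urllib.error.URLError, OSError) as exc:
        print("unreachable:", exc)                # matches the context-deadline failure above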
Sep 30 14:47:57 compute-0 podman[280535]: 2025-09-30 14:47:57.295611459 +0000 UTC m=+0.061636120 container create 89926c2d0536eb695ec442c4f82465b45d5b654fda53b355c81d767e2fa3f033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Sep 30 14:47:57 compute-0 systemd[1]: Started libpod-conmon-89926c2d0536eb695ec442c4f82465b45d5b654fda53b355c81d767e2fa3f033.scope.
Sep 30 14:47:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:47:57 compute-0 nova_compute[261524]: 2025-09-30 14:47:57.364 2 DEBUG nova.network.neutron [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Successfully updated port: e747243d-8f01-4e0e-b24c-7b450e7731b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Sep 30 14:47:57 compute-0 podman[280535]: 2025-09-30 14:47:57.272936121 +0000 UTC m=+0.038960802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:47:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50016f7b5e780d1ef037bd429685d38e7be1ef9719e75dbdeeabf17cf4ccaf30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50016f7b5e780d1ef037bd429685d38e7be1ef9719e75dbdeeabf17cf4ccaf30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50016f7b5e780d1ef037bd429685d38e7be1ef9719e75dbdeeabf17cf4ccaf30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50016f7b5e780d1ef037bd429685d38e7be1ef9719e75dbdeeabf17cf4ccaf30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:57 compute-0 nova_compute[261524]: 2025-09-30 14:47:57.383 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "refresh_cache-c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:47:57 compute-0 nova_compute[261524]: 2025-09-30 14:47:57.383 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquired lock "refresh_cache-c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:47:57 compute-0 nova_compute[261524]: 2025-09-30 14:47:57.384 2 DEBUG nova.network.neutron [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Sep 30 14:47:57 compute-0 podman[280535]: 2025-09-30 14:47:57.385255519 +0000 UTC m=+0.151280200 container init 89926c2d0536eb695ec442c4f82465b45d5b654fda53b355c81d767e2fa3f033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:47:57 compute-0 podman[280535]: 2025-09-30 14:47:57.397341107 +0000 UTC m=+0.163365748 container start 89926c2d0536eb695ec442c4f82465b45d5b654fda53b355c81d767e2fa3f033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_torvalds, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:47:57 compute-0 podman[280535]: 2025-09-30 14:47:57.40112754 +0000 UTC m=+0.167152221 container attach 89926c2d0536eb695ec442c4f82465b45d5b654fda53b355c81d767e2fa3f033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_torvalds, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:47:57 compute-0 nova_compute[261524]: 2025-09-30 14:47:57.455 2 DEBUG nova.compute.manager [req-a554e7d4-11d7-4718-96c2-a70d567d734d req-06ad225f-890a-4c70-a9db-bdeef95634d9 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Received event network-changed-e747243d-8f01-4e0e-b24c-7b450e7731b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:47:57 compute-0 nova_compute[261524]: 2025-09-30 14:47:57.456 2 DEBUG nova.compute.manager [req-a554e7d4-11d7-4718-96c2-a70d567d734d req-06ad225f-890a-4c70-a9db-bdeef95634d9 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Refreshing instance network info cache due to event network-changed-e747243d-8f01-4e0e-b24c-7b450e7731b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Sep 30 14:47:57 compute-0 nova_compute[261524]: 2025-09-30 14:47:57.456 2 DEBUG oslo_concurrency.lockutils [req-a554e7d4-11d7-4718-96c2-a70d567d734d req-06ad225f-890a-4c70-a9db-bdeef95634d9 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "refresh_cache-c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Sep 30 14:47:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:57.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:57 compute-0 musing_torvalds[280551]: {
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:     "0": [
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:         {
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "devices": [
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "/dev/loop3"
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             ],
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "lv_name": "ceph_lv0",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "lv_size": "21470642176",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "name": "ceph_lv0",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "tags": {
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.cluster_name": "ceph",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.crush_device_class": "",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.encrypted": "0",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.osd_id": "0",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.type": "block",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.vdo": "0",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:                 "ceph.with_tpm": "0"
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             },
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "type": "block",
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:             "vg_name": "ceph_vg0"
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:         }
Sep 30 14:47:57 compute-0 musing_torvalds[280551]:     ]
Sep 30 14:47:57 compute-0 musing_torvalds[280551]: }
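The JSON emitted by the musing_torvalds container is the report from the `ceph-volume ... lvm list --format json` call issued via cephadm at 14:47:56: a mapping of OSD id to the logical volumes backing it. A minimal sketch of pulling the key fields out of such a report, assuming the output has been saved to a file (the filename is an assumption):

    import json

    # Parse a `ceph-volume lvm list --format json` report like the one logged
    # above and print OSD id, backing LV path and OSD fsid for each volume.
    with open("ceph_volume_lvm_list.json") as fh:   # hypothetical capture of the output
        report = json.load(fh)

    for osd_id, volumes in report.items():
        for vol in volumes:
            tags = vol.get("tags", {})
            print(osd_id, vol.get("lv_path"), tags.get("ceph.osd_fsid"))
    # For the report above: 0 /dev/ceph_vg0/ceph_lv0 1bf35304-bfb4-41f5-b832-570aa31de1b2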
Sep 30 14:47:57 compute-0 systemd[1]: libpod-89926c2d0536eb695ec442c4f82465b45d5b654fda53b355c81d767e2fa3f033.scope: Deactivated successfully.
Sep 30 14:47:57 compute-0 podman[280535]: 2025-09-30 14:47:57.709796436 +0000 UTC m=+0.475821077 container died 89926c2d0536eb695ec442c4f82465b45d5b654fda53b355c81d767e2fa3f033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-50016f7b5e780d1ef037bd429685d38e7be1ef9719e75dbdeeabf17cf4ccaf30-merged.mount: Deactivated successfully.
Sep 30 14:47:57 compute-0 podman[280535]: 2025-09-30 14:47:57.763058229 +0000 UTC m=+0.529082870 container remove 89926c2d0536eb695ec442c4f82465b45d5b654fda53b355c81d767e2fa3f033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Sep 30 14:47:57 compute-0 systemd[1]: libpod-conmon-89926c2d0536eb695ec442c4f82465b45d5b654fda53b355c81d767e2fa3f033.scope: Deactivated successfully.
Sep 30 14:47:57 compute-0 sudo[280427]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:57 compute-0 sudo[280574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:47:57 compute-0 sudo[280574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:57 compute-0 sudo[280574]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:57 compute-0 ceph-mon[74194]: pgmap v973: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Sep 30 14:47:58 compute-0 sudo[280599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:47:58 compute-0 sudo[280599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:47:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:47:58.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:47:58 compute-0 nova_compute[261524]: 2025-09-30 14:47:58.300 2 DEBUG nova.network.neutron [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Sep 30 14:47:58 compute-0 podman[280666]: 2025-09-30 14:47:58.406594999 +0000 UTC m=+0.045910382 container create 36037acf82ed2de84e5c56c6216eefab8bfa5104d4649c07dfb5310d866ef294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Sep 30 14:47:58 compute-0 systemd[1]: Started libpod-conmon-36037acf82ed2de84e5c56c6216eefab8bfa5104d4649c07dfb5310d866ef294.scope.
Sep 30 14:47:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:47:58 compute-0 podman[280666]: 2025-09-30 14:47:58.38633929 +0000 UTC m=+0.025654703 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:47:58 compute-0 podman[280666]: 2025-09-30 14:47:58.487624536 +0000 UTC m=+0.126939959 container init 36037acf82ed2de84e5c56c6216eefab8bfa5104d4649c07dfb5310d866ef294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:47:58 compute-0 podman[280666]: 2025-09-30 14:47:58.497325895 +0000 UTC m=+0.136641318 container start 36037acf82ed2de84e5c56c6216eefab8bfa5104d4649c07dfb5310d866ef294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 14:47:58 compute-0 podman[280666]: 2025-09-30 14:47:58.501324294 +0000 UTC m=+0.140639727 container attach 36037acf82ed2de84e5c56c6216eefab8bfa5104d4649c07dfb5310d866ef294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:47:58 compute-0 dazzling_varahamihira[280682]: 167 167
Sep 30 14:47:58 compute-0 systemd[1]: libpod-36037acf82ed2de84e5c56c6216eefab8bfa5104d4649c07dfb5310d866ef294.scope: Deactivated successfully.
Sep 30 14:47:58 compute-0 podman[280666]: 2025-09-30 14:47:58.502847661 +0000 UTC m=+0.142163084 container died 36037acf82ed2de84e5c56c6216eefab8bfa5104d4649c07dfb5310d866ef294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fcd2189a62960b7c4e1f1cb7bf06196f20fc6697a5152e0836529ecff24ca44-merged.mount: Deactivated successfully.
Sep 30 14:47:58 compute-0 podman[280666]: 2025-09-30 14:47:58.564540172 +0000 UTC m=+0.203855555 container remove 36037acf82ed2de84e5c56c6216eefab8bfa5104d4649c07dfb5310d866ef294 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_varahamihira, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 14:47:58 compute-0 systemd[1]: libpod-conmon-36037acf82ed2de84e5c56c6216eefab8bfa5104d4649c07dfb5310d866ef294.scope: Deactivated successfully.
Sep 30 14:47:58 compute-0 podman[280706]: 2025-09-30 14:47:58.78437788 +0000 UTC m=+0.052822183 container create 937c8036a73501bb7ae134a5dad591f49a7a1b8ad79f644e597b6870b611eb64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Sep 30 14:47:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v974: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:47:58 compute-0 systemd[1]: Started libpod-conmon-937c8036a73501bb7ae134a5dad591f49a7a1b8ad79f644e597b6870b611eb64.scope.
Sep 30 14:47:58 compute-0 podman[280706]: 2025-09-30 14:47:58.766848658 +0000 UTC m=+0.035292931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:47:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1d4cb9e47774ff07d9aab90599329e6725884a9d3c7a0ad18d98a5a58ddebe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1d4cb9e47774ff07d9aab90599329e6725884a9d3c7a0ad18d98a5a58ddebe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1d4cb9e47774ff07d9aab90599329e6725884a9d3c7a0ad18d98a5a58ddebe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1d4cb9e47774ff07d9aab90599329e6725884a9d3c7a0ad18d98a5a58ddebe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:47:58 compute-0 podman[280706]: 2025-09-30 14:47:58.891880439 +0000 UTC m=+0.160324732 container init 937c8036a73501bb7ae134a5dad591f49a7a1b8ad79f644e597b6870b611eb64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_johnson, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:47:58 compute-0 podman[280706]: 2025-09-30 14:47:58.898579614 +0000 UTC m=+0.167023867 container start 937c8036a73501bb7ae134a5dad591f49a7a1b8ad79f644e597b6870b611eb64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_johnson, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:47:58 compute-0 podman[280706]: 2025-09-30 14:47:58.901821384 +0000 UTC m=+0.170265677 container attach 937c8036a73501bb7ae134a5dad591f49a7a1b8ad79f644e597b6870b611eb64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_johnson, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:47:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:47:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:47:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:47:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:47:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:47:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:47:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:47:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:47:59.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:47:59 compute-0 lvm[280797]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:47:59 compute-0 lvm[280797]: VG ceph_vg0 finished
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:47:59
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', '.mgr', 'volumes', 'default.rgw.control', '.nfs', 'vms', 'cephfs.cephfs.data']
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:47:59 compute-0 nervous_johnson[280722]: {}
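The `{}` printed by nervous_johnson is the complete result of the `ceph-volume ... raw list --format json` call issued at 14:47:58: no raw-mode (non-LVM) OSDs exist on this host, consistent with the single LVM-backed OSD reported a moment earlier. Parsed, the report is simply an empty mapping:

    import json

    # The raw-mode report logged above is literally "{}".
    raw_report = json.loads("{}")
    print(len(raw_report))  # 0 raw-mode OSD devices on this host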
Sep 30 14:47:59 compute-0 systemd[1]: libpod-937c8036a73501bb7ae134a5dad591f49a7a1b8ad79f644e597b6870b611eb64.scope: Deactivated successfully.
Sep 30 14:47:59 compute-0 systemd[1]: libpod-937c8036a73501bb7ae134a5dad591f49a7a1b8ad79f644e597b6870b611eb64.scope: Consumed 1.139s CPU time.
Sep 30 14:47:59 compute-0 podman[280706]: 2025-09-30 14:47:59.607317301 +0000 UTC m=+0.875761644 container died 937c8036a73501bb7ae134a5dad591f49a7a1b8ad79f644e597b6870b611eb64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:47:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c1d4cb9e47774ff07d9aab90599329e6725884a9d3c7a0ad18d98a5a58ddebe-merged.mount: Deactivated successfully.
Sep 30 14:47:59 compute-0 podman[280706]: 2025-09-30 14:47:59.662013269 +0000 UTC m=+0.930457542 container remove 937c8036a73501bb7ae134a5dad591f49a7a1b8ad79f644e597b6870b611eb64 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:47:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:47:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:47:59 compute-0 systemd[1]: libpod-conmon-937c8036a73501bb7ae134a5dad591f49a7a1b8ad79f644e597b6870b611eb64.scope: Deactivated successfully.
Sep 30 14:47:59 compute-0 sudo[280599]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:47:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:47:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:47:59 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.776 2 DEBUG nova.network.neutron [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Updating instance_info_cache with network_info: [{"id": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "address": "fa:16:3e:8f:d1:dc", "network": {"id": "6ec5ed93-a47a-47b3-b4e5-86709a4bab07", "bridge": "br-int", "label": "tempest-network-smoke--1509195432", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape747243d-8f", "ovs_interfaceid": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
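The instance_info_cache update above carries the full Neutron view of port e747243d-8f01-4e0e-b24c-7b450e7731b3, including its fixed address 10.100.0.11 and floating address 192.168.122.200. A minimal sketch of walking a network_info list of that shape; the literal below is a trimmed copy of the logged entry, keeping only the fields the loop touches:

    # Trimmed copy of the network_info entry from the log line above.
    network_info = [{
        "id": "e747243d-8f01-4e0e-b24c-7b450e7731b3",
        "network": {"subnets": [{
            "ips": [{
                "address": "10.100.0.11",
                "floating_ips": [{"address": "192.168.122.200"}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)
    # -> e747243d-8f01-4e0e-b24c-7b450e7731b3 10.100.0.11 ['192.168.122.200']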
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.797 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Releasing lock "refresh_cache-c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.798 2 DEBUG nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Instance network_info: |[{"id": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "address": "fa:16:3e:8f:d1:dc", "network": {"id": "6ec5ed93-a47a-47b3-b4e5-86709a4bab07", "bridge": "br-int", "label": "tempest-network-smoke--1509195432", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape747243d-8f", "ovs_interfaceid": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.798 2 DEBUG oslo_concurrency.lockutils [req-a554e7d4-11d7-4718-96c2-a70d567d734d req-06ad225f-890a-4c70-a9db-bdeef95634d9 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquired lock "refresh_cache-c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.799 2 DEBUG nova.network.neutron [req-a554e7d4-11d7-4718-96c2-a70d567d734d req-06ad225f-890a-4c70-a9db-bdeef95634d9 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Refreshing network info cache for port e747243d-8f01-4e0e-b24c-7b450e7731b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.804 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Start _get_guest_xml network_info=[{"id": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "address": "fa:16:3e:8f:d1:dc", "network": {"id": "6ec5ed93-a47a-47b3-b4e5-86709a4bab07", "bridge": "br-int", "label": "tempest-network-smoke--1509195432", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape747243d-8f", "ovs_interfaceid": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T14:39:17Z,direct_url=<?>,disk_format='qcow2',id=7c70cf84-edc3-42b2-a094-ae3c1dbaffe4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5beed35d375f4bd185a6774dc475e0b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T14:39:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'device_name': '/dev/vda', 'size': 0, 'encryption_format': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'disk_bus': 'virtio', 'image_id': '7c70cf84-edc3-42b2-a094-ae3c1dbaffe4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.811 2 WARNING nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.817 2 DEBUG nova.virt.libvirt.host [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.818 2 DEBUG nova.virt.libvirt.host [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.827 2 DEBUG nova.virt.libvirt.host [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.829 2 DEBUG nova.virt.libvirt.host [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
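The two host.py probes above check cgroup v1 and then the unified cgroup v2 hierarchy for a CPU controller. A minimal sketch of the v2 check, assuming the standard unified mount at /sys/fs/cgroup (an illustration, not nova's implementation):

    # Sketch: report whether the unified cgroup v2 hierarchy exposes the "cpu"
    # controller, which is what the "CPU controller found on host" line above reports.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False          # no unified v2 hierarchy mounted here
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())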
Sep 30 14:47:59 compute-0 sudo[280814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.830 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.830 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-09-30T14:39:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='64f3d3b9-41b6-4b89-8bbd-f654faf17546',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-09-30T14:39:17Z,direct_url=<?>,disk_format='qcow2',id=7c70cf84-edc3-42b2-a094-ae3c1dbaffe4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5beed35d375f4bd185a6774dc475e0b9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-09-30T14:39:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.831 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.832 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.832 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.833 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.833 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.833 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.834 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Sep 30 14:47:59 compute-0 sudo[280814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.834 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.835 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.835 2 DEBUG nova.virt.hardware [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
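With no flavor or image topology constraints (all limits and preferences 0:0:0), the selection above reduces to enumerating sockets*cores*threads factorizations of the vCPU count, which for 1 vCPU leaves only 1:1:1. A rough sketch of that enumeration under the 65536 maxima logged above (illustrative, not the hardware.py code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate (sockets, cores, threads) with sockets * cores * threads == vcpus.
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))   # [(1, 1, 1)] -- matches "Got 1 possible topologies"
    print(possible_topologies(4))   # (1, 1, 4), (1, 2, 2), (2, 2, 1), ... for comparison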
Sep 30 14:47:59 compute-0 sudo[280814]: pam_unix(sudo:session): session closed for user root
Sep 30 14:47:59 compute-0 nova_compute[261524]: 2025-09-30 14:47:59.839 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:47:59 compute-0 ceph-mon[74194]: pgmap v974: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:47:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:47:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:47:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:47:59 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
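The pg_autoscaler figures above are consistent with a total PG budget of about 300, e.g. 3 OSDs at the default mon_target_pg_per_osd of 100 (an assumption; the OSD count is not logged here): each pool's raw target is roughly usage_ratio * bias * 300, which is then quantized to a power of two and left at the current pg_num because the change stays below the autoscaler's adjustment threshold. A small check of that arithmetic:

    # Reproduce the "pg target" values logged above under the assumed 300-PG budget
    # (3 OSDs x mon_target_pg_per_osd=100); this is only the visible arithmetic,
    # not the full autoscaler logic (target ratios, minimums, 3x threshold, ...).
    def raw_pg_target(usage_ratio, bias, pg_budget=300):
        return usage_ratio * bias * pg_budget

    print(raw_pg_target(0.0003459970412515465, 1.0))  # ~0.1038, pool 'vms'
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # ~0.00061, pool 'cephfs.cephfs.meta'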
Sep 30 14:48:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:00.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 14:48:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947956249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.262 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.308 2 DEBUG nova.storage.rbd_utils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.314 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Sep 30 14:48:00 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2345579675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.798 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
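The two "ceph mon dump" runs above are how nova resolves the monitor addresses that later appear as <host> elements in the RBD disk sources of the guest XML further down. A self-contained sketch of that lookup, reusing the client id and conf path from the log (not nova's rbd_utils code):

    import json
    import subprocess

    def ceph_mon_addresses(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json", "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        mon_map = json.loads(out)
        # each mon entry carries an "addr" such as "192.168.122.100:6789/0"
        return [mon["addr"].split("/")[0] for mon in mon_map.get("mons", [])]

    print(ceph_mon_addresses())   # expected here: the three 192.168.122.10x:6789 monitors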
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.802 2 DEBUG nova.virt.libvirt.vif [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-09-30T14:47:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-739140428',display_name='tempest-TestNetworkBasicOps-server-739140428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-739140428',id=9,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcOpREO8dqSvT6udbSdc8QolXOyW9sjdRSsFUenM7c5Hmbrvu7VpqSEKGB8rSCraG+oFsQDKRB4CTLJ/+Ql6kKWkz4gT45V1VLpqzcv5KOn9oA9f9iMPaAelP8f/4L6Aw==',key_name='tempest-TestNetworkBasicOps-920022896',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-xaem05h3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T14:47:54Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "address": "fa:16:3e:8f:d1:dc", "network": {"id": "6ec5ed93-a47a-47b3-b4e5-86709a4bab07", "bridge": "br-int", "label": "tempest-network-smoke--1509195432", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape747243d-8f", "ovs_interfaceid": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.802 2 DEBUG nova.network.os_vif_util [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "address": "fa:16:3e:8f:d1:dc", "network": {"id": "6ec5ed93-a47a-47b3-b4e5-86709a4bab07", "bridge": "br-int", "label": "tempest-network-smoke--1509195432", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape747243d-8f", "ovs_interfaceid": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.804 2 DEBUG nova.network.os_vif_util [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:d1:dc,bridge_name='br-int',has_traffic_filtering=True,id=e747243d-8f01-4e0e-b24c-7b450e7731b3,network=Network(6ec5ed93-a47a-47b3-b4e5-86709a4bab07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape747243d-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.807 2 DEBUG nova.objects.instance [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.825 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] End _get_guest_xml xml=<domain type="kvm">
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <uuid>c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0</uuid>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <name>instance-00000009</name>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <memory>131072</memory>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <vcpu>1</vcpu>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <metadata>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <nova:name>tempest-TestNetworkBasicOps-server-739140428</nova:name>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <nova:creationTime>2025-09-30 14:47:59</nova:creationTime>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <nova:flavor name="m1.nano">
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <nova:memory>128</nova:memory>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <nova:disk>1</nova:disk>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <nova:swap>0</nova:swap>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <nova:ephemeral>0</nova:ephemeral>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <nova:vcpus>1</nova:vcpus>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       </nova:flavor>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <nova:owner>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <nova:user uuid="59c80c4f189d4667aec64b43afc69ed2">tempest-TestNetworkBasicOps-195302952-project-member</nova:user>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <nova:project uuid="0f6bbb74396f4cb7bfa999ebdabfe722">tempest-TestNetworkBasicOps-195302952</nova:project>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       </nova:owner>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <nova:root type="image" uuid="7c70cf84-edc3-42b2-a094-ae3c1dbaffe4"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <nova:ports>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <nova:port uuid="e747243d-8f01-4e0e-b24c-7b450e7731b3">
Sep 30 14:48:00 compute-0 nova_compute[261524]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         </nova:port>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       </nova:ports>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     </nova:instance>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   </metadata>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <sysinfo type="smbios">
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <system>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <entry name="manufacturer">RDO</entry>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <entry name="product">OpenStack Compute</entry>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <entry name="serial">c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0</entry>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <entry name="uuid">c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0</entry>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <entry name="family">Virtual Machine</entry>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     </system>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   </sysinfo>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <os>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <type arch="x86_64" machine="q35">hvm</type>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <boot dev="hd"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <smbios mode="sysinfo"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   </os>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <features>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <acpi/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <apic/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <vmcoreinfo/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   </features>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <clock offset="utc">
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <timer name="pit" tickpolicy="delay"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <timer name="rtc" tickpolicy="catchup"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <timer name="hpet" present="no"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   </clock>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <cpu mode="host-model" match="exact">
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <topology sockets="1" cores="1" threads="1"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   </cpu>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   <devices>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <disk type="network" device="disk">
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <driver type="raw" cache="none"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <source protocol="rbd" name="vms/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk">
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <host name="192.168.122.100" port="6789"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <host name="192.168.122.102" port="6789"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <host name="192.168.122.101" port="6789"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       </source>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <auth username="openstack">
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <secret type="ceph" uuid="5e3c7776-ac03-5698-b79f-a6dc2d80cae6"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <target dev="vda" bus="virtio"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <disk type="network" device="cdrom">
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <driver type="raw" cache="none"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <source protocol="rbd" name="vms/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk.config">
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <host name="192.168.122.100" port="6789"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <host name="192.168.122.102" port="6789"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <host name="192.168.122.101" port="6789"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       </source>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <auth username="openstack">
Sep 30 14:48:00 compute-0 nova_compute[261524]:         <secret type="ceph" uuid="5e3c7776-ac03-5698-b79f-a6dc2d80cae6"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       </auth>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <target dev="sda" bus="sata"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     </disk>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <interface type="ethernet">
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <mac address="fa:16:3e:8f:d1:dc"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <model type="virtio"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <driver name="vhost" rx_queue_size="512"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <mtu size="1442"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <target dev="tape747243d-8f"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     </interface>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <serial type="pty">
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <log file="/var/lib/nova/instances/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0/console.log" append="off"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     </serial>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <video>
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <model type="virtio"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     </video>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <input type="tablet" bus="usb"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <rng model="virtio">
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <backend model="random">/dev/urandom</backend>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     </rng>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="pci" model="pcie-root-port"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <controller type="usb" index="0"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     <memballoon model="virtio">
Sep 30 14:48:00 compute-0 nova_compute[261524]:       <stats period="10"/>
Sep 30 14:48:00 compute-0 nova_compute[261524]:     </memballoon>
Sep 30 14:48:00 compute-0 nova_compute[261524]:   </devices>
Sep 30 14:48:00 compute-0 nova_compute[261524]: </domain>
Sep 30 14:48:00 compute-0 nova_compute[261524]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
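Once _get_guest_xml has produced the domain definition above, the driver hands it to libvirt, which is what the "Started Virtual Machine qemu-4-instance-00000009" unit further down corresponds to. A minimal sketch of that hand-off with the libvirt Python bindings, assuming a local qemu:///system connection (this is not nova's actual spawn path, which goes through its Guest/Host wrappers):

    import libvirt

    def define_and_boot(domain_xml: str) -> str:
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(domain_xml)   # persist the definition
            dom.create()                       # boot it
            return dom.UUIDString()
        finally:
            conn.close()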
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.827 2 DEBUG nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Preparing to wait for external event network-vif-plugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.828 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.828 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.829 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.830 2 DEBUG nova.virt.libvirt.vif [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-09-30T14:47:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-739140428',display_name='tempest-TestNetworkBasicOps-server-739140428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-739140428',id=9,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcOpREO8dqSvT6udbSdc8QolXOyW9sjdRSsFUenM7c5Hmbrvu7VpqSEKGB8rSCraG+oFsQDKRB4CTLJ/+Ql6kKWkz4gT45V1VLpqzcv5KOn9oA9f9iMPaAelP8f/4L6Aw==',key_name='tempest-TestNetworkBasicOps-920022896',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-xaem05h3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-09-30T14:47:54Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "address": "fa:16:3e:8f:d1:dc", "network": {"id": "6ec5ed93-a47a-47b3-b4e5-86709a4bab07", "bridge": "br-int", "label": "tempest-network-smoke--1509195432", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape747243d-8f", "ovs_interfaceid": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": 
"normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.831 2 DEBUG nova.network.os_vif_util [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "address": "fa:16:3e:8f:d1:dc", "network": {"id": "6ec5ed93-a47a-47b3-b4e5-86709a4bab07", "bridge": "br-int", "label": "tempest-network-smoke--1509195432", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape747243d-8f", "ovs_interfaceid": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.832 2 DEBUG nova.network.os_vif_util [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:d1:dc,bridge_name='br-int',has_traffic_filtering=True,id=e747243d-8f01-4e0e-b24c-7b450e7731b3,network=Network(6ec5ed93-a47a-47b3-b4e5-86709a4bab07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape747243d-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.832 2 DEBUG os_vif [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:d1:dc,bridge_name='br-int',has_traffic_filtering=True,id=e747243d-8f01-4e0e-b24c-7b450e7731b3,network=Network(6ec5ed93-a47a-47b3-b4e5-86709a4bab07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape747243d-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v975: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.834 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.835 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.840 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape747243d-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.840 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape747243d-8f, col_values=(('external_ids', {'iface-id': 'e747243d-8f01-4e0e-b24c-7b450e7731b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:d1:dc', 'vm-uuid': 'c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:00 compute-0 NetworkManager[45472]: <info>  [1759243680.8445] manager: (tape747243d-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.854 2 INFO os_vif [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:d1:dc,bridge_name='br-int',has_traffic_filtering=True,id=e747243d-8f01-4e0e-b24c-7b450e7731b3,network=Network(6ec5ed93-a47a-47b3-b4e5-86709a4bab07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape747243d-8f')
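The three ovsdbapp commands logged in this plug sequence (AddBridgeCommand, AddPortCommand, DbSetCommand) amount to attaching the tap device to br-int and tagging its Interface row so OVN can bind the logical port. Their ovs-vsctl equivalent, with the values from this log (an operator-level illustration, not the os-vif code):

    import subprocess

    # Idempotently ensure br-int exists with the system datapath, then add the tap
    # port and set the external_ids that OVN matches the logical port against.
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int", "--",
                    "set", "Bridge", "br-int", "datapath_type=system"], check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", "tape747243d-8f", "--",
                    "set", "Interface", "tape747243d-8f",
                    "external_ids:iface-id=e747243d-8f01-4e0e-b24c-7b450e7731b3",
                    "external_ids:iface-status=active",
                    "external_ids:attached-mac=fa:16:3e:8f:d1:dc",
                    "external_ids:vm-uuid=c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0"], check=True)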
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.915 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.915 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.917 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] No VIF found with MAC fa:16:3e:8f:d1:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.918 2 INFO nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Using config drive
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:48:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:48:00 compute-0 nova_compute[261524]: 2025-09-30 14:48:00.960 2 DEBUG nova.storage.rbd_utils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:48:00 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/947956249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:48:00 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2345579675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:48:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:48:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:48:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:48:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:48:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:48:01 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:01.408 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:48:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:48:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:01.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:48:01 compute-0 nova_compute[261524]: 2025-09-30 14:48:01.505 2 INFO nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Creating config drive at /var/lib/nova/instances/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0/disk.config
Sep 30 14:48:01 compute-0 nova_compute[261524]: 2025-09-30 14:48:01.513 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbggiqxan execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:48:01 compute-0 nova_compute[261524]: 2025-09-30 14:48:01.641 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbggiqxan" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:48:01 compute-0 nova_compute[261524]: 2025-09-30 14:48:01.676 2 DEBUG nova.storage.rbd_utils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] rbd image c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Sep 30 14:48:01 compute-0 nova_compute[261524]: 2025-09-30 14:48:01.681 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0/disk.config c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:48:01 compute-0 nova_compute[261524]: 2025-09-30 14:48:01.873 2 DEBUG oslo_concurrency.processutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0/disk.config c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:48:01 compute-0 nova_compute[261524]: 2025-09-30 14:48:01.873 2 INFO nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Deleting local config drive /var/lib/nova/instances/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0/disk.config because it was imported into RBD.
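The config-drive path above is: build an ISO9660 image with mkisofs, import it into the Ceph 'vms' pool as <uuid>_disk.config (the cdrom device in the guest XML), then delete the local copy. Reduced to a standalone sketch with the exact paths and names from this log:

    import os
    import subprocess

    instance = "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"
    staged = "/tmp/tmpbggiqxan"   # directory holding the generated metadata files

    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-publisher",
                    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
                    "-quiet", "-J", "-r", "-V", "config-2", staged], check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    os.remove(iso)   # matches "Deleting local config drive ... imported into RBD"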
Sep 30 14:48:01 compute-0 systemd[1]: Starting libvirt secret daemon...
Sep 30 14:48:01 compute-0 systemd[1]: Started libvirt secret daemon.
Sep 30 14:48:01 compute-0 kernel: tape747243d-8f: entered promiscuous mode
Sep 30 14:48:01 compute-0 NetworkManager[45472]: <info>  [1759243681.9683] manager: (tape747243d-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Sep 30 14:48:01 compute-0 ovn_controller[154021]: 2025-09-30T14:48:01Z|00079|binding|INFO|Claiming lport e747243d-8f01-4e0e-b24c-7b450e7731b3 for this chassis.
Sep 30 14:48:01 compute-0 ovn_controller[154021]: 2025-09-30T14:48:01Z|00080|binding|INFO|e747243d-8f01-4e0e-b24c-7b450e7731b3: Claiming fa:16:3e:8f:d1:dc 10.100.0.11
Sep 30 14:48:01 compute-0 systemd-udevd[280796]: Network interface NamePolicy= disabled on kernel command line.
Sep 30 14:48:01 compute-0 nova_compute[261524]: 2025-09-30 14:48:01.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:01 compute-0 NetworkManager[45472]: <info>  [1759243681.9805] manager: (patch-br-int-to-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Sep 30 14:48:01 compute-0 NetworkManager[45472]: <info>  [1759243681.9814] manager: (patch-provnet-5acf2efb-cf69-45fa-8cf3-f555bc74ee6d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Sep 30 14:48:01 compute-0 nova_compute[261524]: 2025-09-30 14:48:01.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:01 compute-0 NetworkManager[45472]: <info>  [1759243681.9926] device (tape747243d-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Sep 30 14:48:01 compute-0 NetworkManager[45472]: <info>  [1759243681.9939] device (tape747243d-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Sep 30 14:48:01 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:01.990 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:d1:dc 10.100.0.11'], port_security=['fa:16:3e:8f:d1:dc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-353521520', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ec5ed93-a47a-47b3-b4e5-86709a4bab07', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-353521520', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '7', 'neutron:security_group_ids': '577c7718-6276-434c-be06-b394756c15c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55cf1edb-01a7-42f4-94c9-ac083fd0aa1f, chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=e747243d-8f01-4e0e-b24c-7b450e7731b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:48:01 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:01.992 163966 INFO neutron.agent.ovn.metadata.agent [-] Port e747243d-8f01-4e0e-b24c-7b450e7731b3 in datapath 6ec5ed93-a47a-47b3-b4e5-86709a4bab07 bound to our chassis
Sep 30 14:48:01 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:01.993 163966 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ec5ed93-a47a-47b3-b4e5-86709a4bab07
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.003 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3eef00-a2d1-4855-aad0-177602d325a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.004 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ec5ed93-a1 in ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.005 269027 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ec5ed93-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.005 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9faf64-120a-4297-a30f-b01dd0bf0cca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.006 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[db254388-1d47-431e-b29f-a859e23d03cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
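The metadata provisioning above creates a per-network namespace (ovnmeta-<network-uuid>) and a veth pair whose inner end, tap6ec5ed93-a1, lives in that namespace while tap6ec5ed93-a0 stays in the root namespace (NetworkManager reports it as a new Veth device just below). The agent does this through pyroute2 under privsep; the rough iproute2 equivalent, using the names from this log, is:

    import subprocess

    ns = "ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07"
    outer, inner = "tap6ec5ed93-a0", "tap6ec5ed93-a1"

    for cmd in (
        ["ip", "netns", "add", ns],
        ["ip", "link", "add", outer, "type", "veth", "peer", "name", inner],
        ["ip", "link", "set", inner, "netns", ns],
        ["ip", "link", "set", outer, "up"],
        ["ip", "-n", ns, "link", "set", inner, "up"],
    ):
        subprocess.run(cmd, check=True)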
Sep 30 14:48:02 compute-0 ceph-mon[74194]: pgmap v975: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:02 compute-0 systemd-machined[215710]: New machine qemu-4-instance-00000009.
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.017 2 DEBUG nova.network.neutron [req-a554e7d4-11d7-4718-96c2-a70d567d734d req-06ad225f-890a-4c70-a9db-bdeef95634d9 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Updated VIF entry in instance network info cache for port e747243d-8f01-4e0e-b24c-7b450e7731b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.018 2 DEBUG nova.network.neutron [req-a554e7d4-11d7-4718-96c2-a70d567d734d req-06ad225f-890a-4c70-a9db-bdeef95634d9 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Updating instance_info_cache with network_info: [{"id": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "address": "fa:16:3e:8f:d1:dc", "network": {"id": "6ec5ed93-a47a-47b3-b4e5-86709a4bab07", "bridge": "br-int", "label": "tempest-network-smoke--1509195432", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape747243d-8f", "ovs_interfaceid": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.019 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c06d2f-4269-4cc5-ba2a-b6e1c303b8f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.038 2 DEBUG oslo_concurrency.lockutils [req-a554e7d4-11d7-4718-96c2-a70d567d734d req-06ad225f-890a-4c70-a9db-bdeef95634d9 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Releasing lock "refresh_cache-c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.043 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[d17bf79e-5738-4cbe-aa4f-3ebb49c48219]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000009.
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.074 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[c40d329b-bd82-4104-af77-f44d57739140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.084 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca08fe0-1a6c-4077-8303-a87168e76efd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 NetworkManager[45472]: <info>  [1759243682.0868] manager: (tap6ec5ed93-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Sep 30 14:48:02 compute-0 ovn_controller[154021]: 2025-09-30T14:48:02Z|00081|binding|INFO|Setting lport e747243d-8f01-4e0e-b24c-7b450e7731b3 ovn-installed in OVS
Sep 30 14:48:02 compute-0 ovn_controller[154021]: 2025-09-30T14:48:02Z|00082|binding|INFO|Setting lport e747243d-8f01-4e0e-b24c-7b450e7731b3 up in Southbound
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.119 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[0311250e-1c91-413b-8bd8-97e9575d52dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.122 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[50ed1449-0a6d-4bed-b5af-4af721c8bcaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 NetworkManager[45472]: <info>  [1759243682.1498] device (tap6ec5ed93-a0): carrier: link connected
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.157 269085 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5fe70d-c09a-47a5-82dc-00aea7886755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.186 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[99e78673-7eea-47b0-8467-8c7e4f034ff6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ec5ed93-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:ef:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704450, 'reachable_time': 24774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281025, 'error': None, 'target': 'ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.208 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[a35d2ae5-18c0-4a29-ac47-5642db3e2788]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:ef50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704450, 'tstamp': 704450}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281026, 'error': None, 'target': 'ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:48:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:02.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.236 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[88cdbee1-eab0-4e60-beaf-c1228db182bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ec5ed93-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:ef:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704450, 'reachable_time': 24774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281027, 'error': None, 'target': 'ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.246 2 DEBUG nova.compute.manager [req-cfe2f2d2-7548-49bc-b785-dfaa76b53ca7 req-9e8470f8-aaf6-49d5-9918-6778664246ca e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Received event network-vif-plugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.246 2 DEBUG oslo_concurrency.lockutils [req-cfe2f2d2-7548-49bc-b785-dfaa76b53ca7 req-9e8470f8-aaf6-49d5-9918-6778664246ca e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.246 2 DEBUG oslo_concurrency.lockutils [req-cfe2f2d2-7548-49bc-b785-dfaa76b53ca7 req-9e8470f8-aaf6-49d5-9918-6778664246ca e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.246 2 DEBUG oslo_concurrency.lockutils [req-cfe2f2d2-7548-49bc-b785-dfaa76b53ca7 req-9e8470f8-aaf6-49d5-9918-6778664246ca e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.247 2 DEBUG nova.compute.manager [req-cfe2f2d2-7548-49bc-b785-dfaa76b53ca7 req-9e8470f8-aaf6-49d5-9918-6778664246ca e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Processing event network-vif-plugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.276 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[32985bdf-519e-4ab3-880a-d9dc94c8dedd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Check health
Sep 30 14:48:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.355 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[3be9812e-628d-4d81-ab93-3cc9a09c6f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.356 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ec5ed93-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.356 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.357 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ec5ed93-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:48:02 compute-0 kernel: tap6ec5ed93-a0: entered promiscuous mode
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:02 compute-0 NetworkManager[45472]: <info>  [1759243682.3607] manager: (tap6ec5ed93-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.364 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ec5ed93-a0, col_values=(('external_ids', {'iface-id': 'ec6f655d-5421-44f6-9d24-d355b27cc206'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:02 compute-0 ovn_controller[154021]: 2025-09-30T14:48:02Z|00083|binding|INFO|Releasing lport ec6f655d-5421-44f6-9d24-d355b27cc206 from this chassis (sb_readonly=0)
Sep 30 14:48:02 compute-0 nova_compute[261524]: 2025-09-30 14:48:02.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.396 163966 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ec5ed93-a47a-47b3-b4e5-86709a4bab07.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ec5ed93-a47a-47b3-b4e5-86709a4bab07.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.397 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd97ab7-1d55-4b42-a405-68c77c70b9e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.398 163966 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: global
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     log         /dev/log local0 debug
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     log-tag     haproxy-metadata-proxy-6ec5ed93-a47a-47b3-b4e5-86709a4bab07
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     user        root
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     group       root
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     maxconn     1024
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     pidfile     /var/lib/neutron/external/pids/6ec5ed93-a47a-47b3-b4e5-86709a4bab07.pid.haproxy
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     daemon
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: defaults
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     log global
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     mode http
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     option httplog
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     option dontlognull
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     option http-server-close
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     option forwardfor
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     retries                 3
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     timeout http-request    30s
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     timeout connect         30s
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     timeout client          32s
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     timeout server          32s
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     timeout http-keep-alive 30s
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: listen listener
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     bind 169.254.169.254:80
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     server metadata /var/lib/neutron/metadata_proxy
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:     http-request add-header X-OVN-Network-ID 6ec5ed93-a47a-47b3-b4e5-86709a4bab07
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Sep 30 14:48:02 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:02.399 163966 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07', 'env', 'PROCESS_TAG=haproxy-6ec5ed93-a47a-47b3-b4e5-86709a4bab07', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ec5ed93-a47a-47b3-b4e5-86709a4bab07.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Sep 30 14:48:02 compute-0 podman[281059]: 2025-09-30 14:48:02.749217893 +0000 UTC m=+0.040185581 container create 78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.license=GPLv2)
Sep 30 14:48:02 compute-0 systemd[1]: Started libpod-conmon-78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a.scope.
Sep 30 14:48:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:48:02 compute-0 podman[281059]: 2025-09-30 14:48:02.728525373 +0000 UTC m=+0.019493081 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Sep 30 14:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11038bf5f9708c1f76ead9134f377d257b6499f9d9fe91149ccaeeabee1cc9eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Sep 30 14:48:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v976: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:02 compute-0 podman[281059]: 2025-09-30 14:48:02.847887665 +0000 UTC m=+0.138855443 container init 78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 14:48:02 compute-0 podman[281059]: 2025-09-30 14:48:02.859783918 +0000 UTC m=+0.150751646 container start 78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:48:02 compute-0 neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07[281075]: [NOTICE]   (281079) : New worker (281081) forked
Sep 30 14:48:02 compute-0 neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07[281075]: [NOTICE]   (281079) : Loading success.
Sep 30 14:48:03 compute-0 podman[281130]: 2025-09-30 14:48:03.371962121 +0000 UTC m=+0.084475143 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Sep 30 14:48:03 compute-0 podman[281133]: 2025-09-30 14:48:03.383018784 +0000 UTC m=+0.086010661 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:48:03 compute-0 podman[281134]: 2025-09-30 14:48:03.390421696 +0000 UTC m=+0.099593465 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Sep 30 14:48:03 compute-0 podman[281131]: 2025-09-30 14:48:03.439523906 +0000 UTC m=+0.148996183 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 14:48:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:48:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:03.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:48:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:03.677Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.822 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243683.8217402, c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.823 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] VM Started (Lifecycle Event)
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.825 2 DEBUG nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.831 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.834 2 INFO nova.virt.libvirt.driver [-] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Instance spawned successfully.
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.834 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.842 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.845 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.854 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.854 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.855 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.855 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.856 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.856 2 DEBUG nova.virt.libvirt.driver [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.867 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] During sync_power_state the instance has a pending task (spawning). Skip.
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.868 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243683.8222718, c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.868 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] VM Paused (Lifecycle Event)
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.893 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.897 2 DEBUG nova.virt.driver [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] Emitting event <LifecycleEvent: 1759243683.828375, c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.897 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] VM Resumed (Lifecycle Event)
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.917 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.922 2 INFO nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Took 9.04 seconds to spawn the instance on the hypervisor.
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.923 2 DEBUG nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.923 2 DEBUG nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.950 2 INFO nova.compute.manager [None req-922298cc-cb02-4ff8-877b-38e4c5e1ae62 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] During sync_power_state the instance has a pending task (spawning). Skip.
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.985 2 INFO nova.compute.manager [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Took 10.09 seconds to build instance.
Sep 30 14:48:03 compute-0 nova_compute[261524]: 2025-09-30 14:48:03.998 2 DEBUG oslo_concurrency.lockutils [None req-062b1e96-4596-49b0-9080-30b31dde913f 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:04 compute-0 ceph-mon[74194]: pgmap v976: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000049s ======
Sep 30 14:48:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:04.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Sep 30 14:48:04 compute-0 nova_compute[261524]: 2025-09-30 14:48:04.322 2 DEBUG nova.compute.manager [req-c636f62b-70e6-471a-a3af-23628da8819c req-ed021da3-bc2c-4a4a-aa1c-36c3a5a06a9d e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Received event network-vif-plugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:48:04 compute-0 nova_compute[261524]: 2025-09-30 14:48:04.323 2 DEBUG oslo_concurrency.lockutils [req-c636f62b-70e6-471a-a3af-23628da8819c req-ed021da3-bc2c-4a4a-aa1c-36c3a5a06a9d e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:04 compute-0 nova_compute[261524]: 2025-09-30 14:48:04.324 2 DEBUG oslo_concurrency.lockutils [req-c636f62b-70e6-471a-a3af-23628da8819c req-ed021da3-bc2c-4a4a-aa1c-36c3a5a06a9d e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:04 compute-0 nova_compute[261524]: 2025-09-30 14:48:04.325 2 DEBUG oslo_concurrency.lockutils [req-c636f62b-70e6-471a-a3af-23628da8819c req-ed021da3-bc2c-4a4a-aa1c-36c3a5a06a9d e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:04 compute-0 nova_compute[261524]: 2025-09-30 14:48:04.325 2 DEBUG nova.compute.manager [req-c636f62b-70e6-471a-a3af-23628da8819c req-ed021da3-bc2c-4a4a-aa1c-36c3a5a06a9d e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] No waiting events found dispatching network-vif-plugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:48:04 compute-0 nova_compute[261524]: 2025-09-30 14:48:04.326 2 WARNING nova.compute.manager [req-c636f62b-70e6-471a-a3af-23628da8819c req-ed021da3-bc2c-4a4a-aa1c-36c3a5a06a9d e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Received unexpected event network-vif-plugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 for instance with vm_state active and task_state None.
Sep 30 14:48:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:04] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:48:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:04] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:48:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v977: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:05.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:05 compute-0 nova_compute[261524]: 2025-09-30 14:48:05.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:05 compute-0 sudo[281220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:48:05 compute-0 sudo[281220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:48:05 compute-0 sudo[281220]: pam_unix(sudo:session): session closed for user root
Sep 30 14:48:05 compute-0 nova_compute[261524]: 2025-09-30 14:48:05.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:06 compute-0 ceph-mon[74194]: pgmap v977: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:06.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.472 2 DEBUG oslo_concurrency.lockutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.473 2 DEBUG oslo_concurrency.lockutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.473 2 DEBUG oslo_concurrency.lockutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.474 2 DEBUG oslo_concurrency.lockutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.474 2 DEBUG oslo_concurrency.lockutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.475 2 INFO nova.compute.manager [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Terminating instance
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.476 2 DEBUG nova.compute.manager [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Sep 30 14:48:06 compute-0 kernel: tape747243d-8f (unregistering): left promiscuous mode
Sep 30 14:48:06 compute-0 NetworkManager[45472]: <info>  [1759243686.5207] device (tape747243d-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:06 compute-0 ovn_controller[154021]: 2025-09-30T14:48:06Z|00084|binding|INFO|Releasing lport e747243d-8f01-4e0e-b24c-7b450e7731b3 from this chassis (sb_readonly=0)
Sep 30 14:48:06 compute-0 ovn_controller[154021]: 2025-09-30T14:48:06Z|00085|binding|INFO|Setting lport e747243d-8f01-4e0e-b24c-7b450e7731b3 down in Southbound
Sep 30 14:48:06 compute-0 ovn_controller[154021]: 2025-09-30T14:48:06Z|00086|binding|INFO|Removing iface tape747243d-8f ovn-installed in OVS
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.538 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:d1:dc 10.100.0.11'], port_security=['fa:16:3e:8f:d1:dc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-353521520', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ec5ed93-a47a-47b3-b4e5-86709a4bab07', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-353521520', 'neutron:project_id': '0f6bbb74396f4cb7bfa999ebdabfe722', 'neutron:revision_number': '9', 'neutron:security_group_ids': '577c7718-6276-434c-be06-b394756c15c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55cf1edb-01a7-42f4-94c9-ac083fd0aa1f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>], logical_port=e747243d-8f01-4e0e-b24c-7b450e7731b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f8c6753f7f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.539 163966 INFO neutron.agent.ovn.metadata.agent [-] Port e747243d-8f01-4e0e-b24c-7b450e7731b3 in datapath 6ec5ed93-a47a-47b3-b4e5-86709a4bab07 unbound from our chassis
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.540 163966 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ec5ed93-a47a-47b3-b4e5-86709a4bab07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.541 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[e03898a2-b230-4c90-9f77-d45443eb819a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.542 163966 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07 namespace which is not needed anymore
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:06 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Deactivated successfully.
Sep 30 14:48:06 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Consumed 4.471s CPU time.
Sep 30 14:48:06 compute-0 systemd-machined[215710]: Machine qemu-4-instance-00000009 terminated.
Sep 30 14:48:06 compute-0 neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07[281075]: [NOTICE]   (281079) : haproxy version is 2.8.14-c23fe91
Sep 30 14:48:06 compute-0 neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07[281075]: [NOTICE]   (281079) : path to executable is /usr/sbin/haproxy
Sep 30 14:48:06 compute-0 neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07[281075]: [WARNING]  (281079) : Exiting Master process...
Sep 30 14:48:06 compute-0 neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07[281075]: [ALERT]    (281079) : Current worker (281081) exited with code 143 (Terminated)
Sep 30 14:48:06 compute-0 neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07[281075]: [WARNING]  (281079) : All workers exited. Exiting... (0)
Sep 30 14:48:06 compute-0 systemd[1]: libpod-78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a.scope: Deactivated successfully.
Sep 30 14:48:06 compute-0 podman[281269]: 2025-09-30 14:48:06.678570393 +0000 UTC m=+0.050991760 container died 78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Sep 30 14:48:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a-userdata-shm.mount: Deactivated successfully.
Sep 30 14:48:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-11038bf5f9708c1f76ead9134f377d257b6499f9d9fe91149ccaeeabee1cc9eb-merged.mount: Deactivated successfully.
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.729 2 INFO nova.virt.libvirt.driver [-] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Instance destroyed successfully.
Sep 30 14:48:06 compute-0 podman[281269]: 2025-09-30 14:48:06.732901001 +0000 UTC m=+0.105322368 container cleanup 78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3)
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.730 2 DEBUG nova.objects.instance [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lazy-loading 'resources' on Instance uuid c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Sep 30 14:48:06 compute-0 systemd[1]: libpod-conmon-78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a.scope: Deactivated successfully.
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.762 2 DEBUG nova.virt.libvirt.vif [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-09-30T14:47:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-739140428',display_name='tempest-TestNetworkBasicOps-server-739140428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-739140428',id=9,image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFcOpREO8dqSvT6udbSdc8QolXOyW9sjdRSsFUenM7c5Hmbrvu7VpqSEKGB8rSCraG+oFsQDKRB4CTLJ/+Ql6kKWkz4gT45V1VLpqzcv5KOn9oA9f9iMPaAelP8f/4L6Aw==',key_name='tempest-TestNetworkBasicOps-920022896',keypairs=<?>,launch_index=0,launched_at=2025-09-30T14:48:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f6bbb74396f4cb7bfa999ebdabfe722',ramdisk_id='',reservation_id='r-xaem05h3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c70cf84-edc3-42b2-a094-ae3c1dbaffe4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-195302952',owner_user_name='tempest-TestNetworkBasicOps-195302952-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-09-30T14:48:03Z,user_data=None,user_id='59c80c4f189d4667aec64b43afc69ed2',uuid=c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "address": "fa:16:3e:8f:d1:dc", "network": {"id": "6ec5ed93-a47a-47b3-b4e5-86709a4bab07", "bridge": "br-int", "label": "tempest-network-smoke--1509195432", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape747243d-8f", "ovs_interfaceid": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.763 2 DEBUG nova.network.os_vif_util [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converting VIF {"id": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "address": "fa:16:3e:8f:d1:dc", "network": {"id": "6ec5ed93-a47a-47b3-b4e5-86709a4bab07", "bridge": "br-int", "label": "tempest-network-smoke--1509195432", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f6bbb74396f4cb7bfa999ebdabfe722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape747243d-8f", "ovs_interfaceid": "e747243d-8f01-4e0e-b24c-7b450e7731b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.764 2 DEBUG nova.network.os_vif_util [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:d1:dc,bridge_name='br-int',has_traffic_filtering=True,id=e747243d-8f01-4e0e-b24c-7b450e7731b3,network=Network(6ec5ed93-a47a-47b3-b4e5-86709a4bab07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape747243d-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.764 2 DEBUG os_vif [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:d1:dc,bridge_name='br-int',has_traffic_filtering=True,id=e747243d-8f01-4e0e-b24c-7b450e7731b3,network=Network(6ec5ed93-a47a-47b3-b4e5-86709a4bab07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape747243d-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.766 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape747243d-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.773 2 INFO os_vif [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:d1:dc,bridge_name='br-int',has_traffic_filtering=True,id=e747243d-8f01-4e0e-b24c-7b450e7731b3,network=Network(6ec5ed93-a47a-47b3-b4e5-86709a4bab07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape747243d-8f')
Sep 30 14:48:06 compute-0 podman[281308]: 2025-09-30 14:48:06.810089998 +0000 UTC m=+0.049662515 container remove 78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.817 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9486b1-9be9-4b11-ad3b-95f157bb7093]: (4, ('Tue Sep 30 02:48:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07 (78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a)\n78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a\nTue Sep 30 02:48:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07 (78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a)\n78c8bae7e2f8414e1b4e5802d03d2db491dee5a46f66b4f059ea5ce672c9fb0a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.818 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e900c9-c528-4f13-84b6-e40e315693ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.819 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ec5ed93-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:06 compute-0 kernel: tap6ec5ed93-a0: left promiscuous mode
Sep 30 14:48:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v978: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:48:06 compute-0 nova_compute[261524]: 2025-09-30 14:48:06.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.843 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[efe87c06-1c04-4f8d-ab7e-c7235bf0d6e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.874 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbbd457-3bf8-49ad-98bc-9e33df987133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.876 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[6e435e3f-525c-4363-92b8-1fbfa365761a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.892 269027 DEBUG oslo.privsep.daemon [-] privsep: reply[fef6ee65-57e1-4800-9c7a-65601ad694a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704441, 'reachable_time': 42192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281342, 'error': None, 'target': 'ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.896 164124 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ec5ed93-a47a-47b3-b4e5-86709a4bab07 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Sep 30 14:48:06 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:06.896 164124 DEBUG oslo.privsep.daemon [-] privsep: reply[724d8224-8aec-403a-8500-26df9124801c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Sep 30 14:48:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d6ec5ed93\x2da47a\x2d47b3\x2db4e5\x2d86709a4bab07.mount: Deactivated successfully.
Sep 30 14:48:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:07.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.244 2 DEBUG nova.compute.manager [req-9094c25d-ebd3-4068-ab36-2d52ab3b2555 req-c7c8176d-7ee8-4428-9c2b-fbad8b85c027 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Received event network-vif-unplugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.245 2 DEBUG oslo_concurrency.lockutils [req-9094c25d-ebd3-4068-ab36-2d52ab3b2555 req-c7c8176d-7ee8-4428-9c2b-fbad8b85c027 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.245 2 DEBUG oslo_concurrency.lockutils [req-9094c25d-ebd3-4068-ab36-2d52ab3b2555 req-c7c8176d-7ee8-4428-9c2b-fbad8b85c027 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.246 2 DEBUG oslo_concurrency.lockutils [req-9094c25d-ebd3-4068-ab36-2d52ab3b2555 req-c7c8176d-7ee8-4428-9c2b-fbad8b85c027 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.246 2 DEBUG nova.compute.manager [req-9094c25d-ebd3-4068-ab36-2d52ab3b2555 req-c7c8176d-7ee8-4428-9c2b-fbad8b85c027 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] No waiting events found dispatching network-vif-unplugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.246 2 DEBUG nova.compute.manager [req-9094c25d-ebd3-4068-ab36-2d52ab3b2555 req-c7c8176d-7ee8-4428-9c2b-fbad8b85c027 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Received event network-vif-unplugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.327 2 INFO nova.virt.libvirt.driver [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Deleting instance files /var/lib/nova/instances/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_del
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.328 2 INFO nova.virt.libvirt.driver [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Deletion of /var/lib/nova/instances/c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0_del complete
Sep 30 14:48:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.389 2 INFO nova.compute.manager [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Took 0.91 seconds to destroy the instance on the hypervisor.
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.390 2 DEBUG oslo.service.loopingcall [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.390 2 DEBUG nova.compute.manager [-] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Sep 30 14:48:07 compute-0 nova_compute[261524]: 2025-09-30 14:48:07.391 2 DEBUG nova.network.neutron [-] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Sep 30 14:48:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:07.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:08 compute-0 ceph-mon[74194]: pgmap v978: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:48:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:08.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v979: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:48:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.343 2 DEBUG nova.compute.manager [req-6b0d3ac9-0a65-4314-b576-ae557a8cba60 req-15d62182-c15f-4161-aae9-0ce21b67f101 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Received event network-vif-plugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.343 2 DEBUG oslo_concurrency.lockutils [req-6b0d3ac9-0a65-4314-b576-ae557a8cba60 req-15d62182-c15f-4161-aae9-0ce21b67f101 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Acquiring lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.344 2 DEBUG oslo_concurrency.lockutils [req-6b0d3ac9-0a65-4314-b576-ae557a8cba60 req-15d62182-c15f-4161-aae9-0ce21b67f101 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.344 2 DEBUG oslo_concurrency.lockutils [req-6b0d3ac9-0a65-4314-b576-ae557a8cba60 req-15d62182-c15f-4161-aae9-0ce21b67f101 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.345 2 DEBUG nova.compute.manager [req-6b0d3ac9-0a65-4314-b576-ae557a8cba60 req-15d62182-c15f-4161-aae9-0ce21b67f101 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] No waiting events found dispatching network-vif-plugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.345 2 WARNING nova.compute.manager [req-6b0d3ac9-0a65-4314-b576-ae557a8cba60 req-15d62182-c15f-4161-aae9-0ce21b67f101 e7e4c3fa3d0d4d94b9d68d4fe2bf5fe0 8bc8efda5ab447f8b5d5418b9ecd7c11 - - default default] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Received unexpected event network-vif-plugged-e747243d-8f01-4e0e-b24c-7b450e7731b3 for instance with vm_state active and task_state deleting.
Sep 30 14:48:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:09.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.804 2 DEBUG nova.network.neutron [-] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.822 2 INFO nova.compute.manager [-] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Took 2.43 seconds to deallocate network for instance.
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.862 2 DEBUG oslo_concurrency.lockutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.863 2 DEBUG oslo_concurrency.lockutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:09 compute-0 nova_compute[261524]: 2025-09-30 14:48:09.918 2 DEBUG oslo_concurrency.processutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:48:10 compute-0 ceph-mon[74194]: pgmap v979: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:48:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:10.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:48:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591671382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:10 compute-0 nova_compute[261524]: 2025-09-30 14:48:10.408 2 DEBUG oslo_concurrency.processutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:48:10 compute-0 nova_compute[261524]: 2025-09-30 14:48:10.415 2 DEBUG nova.compute.provider_tree [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:48:10 compute-0 nova_compute[261524]: 2025-09-30 14:48:10.439 2 DEBUG nova.scheduler.client.report [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:48:10 compute-0 nova_compute[261524]: 2025-09-30 14:48:10.465 2 DEBUG oslo_concurrency.lockutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:10 compute-0 nova_compute[261524]: 2025-09-30 14:48:10.494 2 INFO nova.scheduler.client.report [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Deleted allocations for instance c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0
Sep 30 14:48:10 compute-0 nova_compute[261524]: 2025-09-30 14:48:10.560 2 DEBUG oslo_concurrency.lockutils [None req-619cb4e8-4308-47a4-8271-da7d6d458c63 59c80c4f189d4667aec64b43afc69ed2 0f6bbb74396f4cb7bfa999ebdabfe722 - - default default] Lock "c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:10 compute-0 nova_compute[261524]: 2025-09-30 14:48:10.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v980: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:48:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2591671382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1822103928' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:48:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1822103928' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:48:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:11.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:11 compute-0 nova_compute[261524]: 2025-09-30 14:48:11.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:12 compute-0 ceph-mon[74194]: pgmap v980: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:48:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:12.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:48:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:13.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:13.678Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:48:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:13.678Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:48:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:14.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:14 compute-0 ceph-mon[74194]: pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:48:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:48:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:48:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:14] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:48:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:14] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:48:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:48:14 compute-0 nova_compute[261524]: 2025-09-30 14:48:14.949 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:48:14 compute-0 nova_compute[261524]: 2025-09-30 14:48:14.970 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:48:14 compute-0 nova_compute[261524]: 2025-09-30 14:48:14.970 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:48:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:48:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1366965487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:15.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:15 compute-0 nova_compute[261524]: 2025-09-30 14:48:15.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:15 compute-0 nova_compute[261524]: 2025-09-30 14:48:15.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:48:15 compute-0 nova_compute[261524]: 2025-09-30 14:48:15.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:48:15 compute-0 nova_compute[261524]: 2025-09-30 14:48:15.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:48:15 compute-0 nova_compute[261524]: 2025-09-30 14:48:15.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:48:15 compute-0 nova_compute[261524]: 2025-09-30 14:48:15.983 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:48:15 compute-0 nova_compute[261524]: 2025-09-30 14:48:15.983 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:48:15 compute-0 nova_compute[261524]: 2025-09-30 14:48:15.983 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:48:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:16.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:16 compute-0 ceph-mon[74194]: pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:48:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/567745018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:16 compute-0 nova_compute[261524]: 2025-09-30 14:48:16.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:48:16 compute-0 nova_compute[261524]: 2025-09-30 14:48:16.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:48:16 compute-0 nova_compute[261524]: 2025-09-30 14:48:16.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:48:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:17.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:48:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1501349445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1470343170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:17 compute-0 nova_compute[261524]: 2025-09-30 14:48:17.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:17.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:17 compute-0 nova_compute[261524]: 2025-09-30 14:48:17.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:17 compute-0 nova_compute[261524]: 2025-09-30 14:48:17.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:48:17 compute-0 nova_compute[261524]: 2025-09-30 14:48:17.978 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:17 compute-0 nova_compute[261524]: 2025-09-30 14:48:17.978 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:17 compute-0 nova_compute[261524]: 2025-09-30 14:48:17.979 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:17 compute-0 nova_compute[261524]: 2025-09-30 14:48:17.979 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:48:17 compute-0 nova_compute[261524]: 2025-09-30 14:48:17.980 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:48:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:18.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:18 compute-0 ceph-mon[74194]: pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Sep 30 14:48:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:48:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937410207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:18 compute-0 nova_compute[261524]: 2025-09-30 14:48:18.482 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:48:18 compute-0 nova_compute[261524]: 2025-09-30 14:48:18.666 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:48:18 compute-0 nova_compute[261524]: 2025-09-30 14:48:18.668 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4550MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:48:18 compute-0 nova_compute[261524]: 2025-09-30 14:48:18.668 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:18 compute-0 nova_compute[261524]: 2025-09-30 14:48:18.668 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:18 compute-0 nova_compute[261524]: 2025-09-30 14:48:18.767 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:48:18 compute-0 nova_compute[261524]: 2025-09-30 14:48:18.767 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:48:18 compute-0 nova_compute[261524]: 2025-09-30 14:48:18.784 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:48:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Sep 30 14:48:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:48:19 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505373112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:19 compute-0 nova_compute[261524]: 2025-09-30 14:48:19.261 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:48:19 compute-0 nova_compute[261524]: 2025-09-30 14:48:19.269 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:48:19 compute-0 nova_compute[261524]: 2025-09-30 14:48:19.287 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:48:19 compute-0 nova_compute[261524]: 2025-09-30 14:48:19.313 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:48:19 compute-0 nova_compute[261524]: 2025-09-30 14:48:19.313 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/937410207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2505373112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:19.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:20.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:20 compute-0 ceph-mon[74194]: pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Sep 30 14:48:20 compute-0 nova_compute[261524]: 2025-09-30 14:48:20.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Sep 30 14:48:21 compute-0 nova_compute[261524]: 2025-09-30 14:48:21.315 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:48:21 compute-0 ceph-mon[74194]: pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Sep 30 14:48:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:21.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:21 compute-0 nova_compute[261524]: 2025-09-30 14:48:21.718 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759243686.7166603, c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Sep 30 14:48:21 compute-0 nova_compute[261524]: 2025-09-30 14:48:21.718 2 INFO nova.compute.manager [-] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] VM Stopped (Lifecycle Event)
Sep 30 14:48:21 compute-0 nova_compute[261524]: 2025-09-30 14:48:21.746 2 DEBUG nova.compute.manager [None req-97102729-be56-472e-817b-fefc30d98c23 - - - - - -] [instance: c5ebc51f-ff0e-4004-9a6a-40ceba6e18b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Sep 30 14:48:21 compute-0 nova_compute[261524]: 2025-09-30 14:48:21.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:22.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 14:48:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:23.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:23.679Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:48:23 compute-0 ceph-mon[74194]: pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Sep 30 14:48:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:24.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:24] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:48:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:24] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:48:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:48:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:25.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:25 compute-0 nova_compute[261524]: 2025-09-30 14:48:25.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:25 compute-0 sudo[281431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:48:25 compute-0 sudo[281431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:48:25 compute-0 sudo[281431]: pam_unix(sudo:session): session closed for user root
Sep 30 14:48:25 compute-0 ceph-mon[74194]: pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:48:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:26.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:26 compute-0 nova_compute[261524]: 2025-09-30 14:48:26.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:48:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:27.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:48:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:27.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:48:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:27.178Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:48:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:27.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:27 compute-0 ceph-mon[74194]: pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:48:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:28.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:48:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:29.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:48:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:48:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:48:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:48:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:48:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:48:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:48:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:48:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=cleanup t=2025-09-30T14:48:29.884992508Z level=info msg="Completed cleanup jobs" duration=42.903177ms
Sep 30 14:48:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=plugins.update.checker t=2025-09-30T14:48:29.957034551Z level=info msg="Update check succeeded" duration=47.758825ms
Sep 30 14:48:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=grafana.update.checker t=2025-09-30T14:48:29.966871689Z level=info msg="Update check succeeded" duration=53.888245ms
Sep 30 14:48:29 compute-0 ceph-mon[74194]: pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:48:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:48:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:30.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:30 compute-0 nova_compute[261524]: 2025-09-30 14:48:30.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:48:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:31.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:31 compute-0 nova_compute[261524]: 2025-09-30 14:48:31.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:32 compute-0 ceph-mon[74194]: pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:48:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:32.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.022135) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243713022221, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1428, "num_deletes": 501, "total_data_size": 2010771, "memory_usage": 2036288, "flush_reason": "Manual Compaction"}
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243713042407, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1927969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28253, "largest_seqno": 29680, "table_properties": {"data_size": 1921939, "index_size": 2785, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16909, "raw_average_key_size": 19, "raw_value_size": 1907627, "raw_average_value_size": 2220, "num_data_blocks": 122, "num_entries": 859, "num_filter_entries": 859, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759243613, "oldest_key_time": 1759243613, "file_creation_time": 1759243713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 20304 microseconds, and 6997 cpu microseconds.
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.042446) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1927969 bytes OK
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.042467) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.045443) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.045457) EVENT_LOG_v1 {"time_micros": 1759243713045453, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.045473) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2003538, prev total WAL file size 2003538, number of live WAL files 2.
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.046010) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1882KB)], [62(16MB)]
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243713046044, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 18835237, "oldest_snapshot_seqno": -1}
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5845 keys, 12593281 bytes, temperature: kUnknown
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243713149134, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12593281, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12555782, "index_size": 21773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 150923, "raw_average_key_size": 25, "raw_value_size": 12451805, "raw_average_value_size": 2130, "num_data_blocks": 871, "num_entries": 5845, "num_filter_entries": 5845, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759243713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.149576) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12593281 bytes
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.152250) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.4 rd, 122.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 16.1 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(16.3) write-amplify(6.5) OK, records in: 6860, records dropped: 1015 output_compression: NoCompression
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.152282) EVENT_LOG_v1 {"time_micros": 1759243713152268, "job": 34, "event": "compaction_finished", "compaction_time_micros": 103256, "compaction_time_cpu_micros": 25201, "output_level": 6, "num_output_files": 1, "total_output_size": 12593281, "num_input_records": 6860, "num_output_records": 5845, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243713153030, "job": 34, "event": "table_file_deletion", "file_number": 64}
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243713159330, "job": 34, "event": "table_file_deletion", "file_number": 62}
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.045947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.159449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.159460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.159463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.159468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:48:33 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:48:33.159472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:48:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:33.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:33.680Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:48:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:34 compute-0 ceph-mon[74194]: pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:48:34 compute-0 podman[281467]: 2025-09-30 14:48:34.136942334 +0000 UTC m=+0.058993251 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:48:34 compute-0 podman[281465]: 2025-09-30 14:48:34.136978375 +0000 UTC m=+0.063910630 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:48:34 compute-0 podman[281468]: 2025-09-30 14:48:34.145092538 +0000 UTC m=+0.063351235 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:48:34 compute-0 podman[281466]: 2025-09-30 14:48:34.183121457 +0000 UTC m=+0.106798106 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Sep 30 14:48:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:34.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:34] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:48:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:34] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:48:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:48:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4196190591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:48:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:35.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:35 compute-0 nova_compute[261524]: 2025-09-30 14:48:35.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:36 compute-0 ceph-mon[74194]: pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:48:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:36.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:36 compute-0 nova_compute[261524]: 2025-09-30 14:48:36.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v993: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:37.179Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:48:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:37.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:48:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:37.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:38 compute-0 ceph-mon[74194]: pgmap v993: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:38.264 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:48:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:38.265 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:48:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:38.265 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:48:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:38.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v994: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:39 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/486884364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:48:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:39.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:40 compute-0 ceph-mon[74194]: pgmap v994: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/42618037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:48:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:40.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:40 compute-0 nova_compute[261524]: 2025-09-30 14:48:40.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:40 compute-0 unix_chkpwd[281552]: password check failed for user (root)
Sep 30 14:48:40 compute-0 sshd-session[281549]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=194.0.234.19  user=root
Sep 30 14:48:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:41 compute-0 nova_compute[261524]: 2025-09-30 14:48:41.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:42 compute-0 ceph-mon[74194]: pgmap v995: 337 pgs: 337 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:48:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v996: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Sep 30 14:48:43 compute-0 sshd-session[281549]: Failed password for root from 194.0.234.19 port 47242 ssh2
Sep 30 14:48:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:43.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:43.681Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:48:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:44 compute-0 ceph-mon[74194]: pgmap v996: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Sep 30 14:48:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:44.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:48:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:48:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:44] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Sep 30 14:48:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:44] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Sep 30 14:48:44 compute-0 sshd-session[281549]: Connection closed by authenticating user root 194.0.234.19 port 47242 [preauth]
Sep 30 14:48:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v997: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 14:48:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:48:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:45 compute-0 nova_compute[261524]: 2025-09-30 14:48:45.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:45 compute-0 sudo[281559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:48:45 compute-0 sudo[281559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:48:45 compute-0 sudo[281559]: pam_unix(sudo:session): session closed for user root
Sep 30 14:48:46 compute-0 ceph-mon[74194]: pgmap v997: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Sep 30 14:48:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:46 compute-0 nova_compute[261524]: 2025-09-30 14:48:46.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v998: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:48:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:47.181Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:48:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:47.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:48:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:48 compute-0 ceph-mon[74194]: pgmap v998: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:48:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:48.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v999: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:48:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:49 compute-0 ceph-mon[74194]: pgmap v999: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:48:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:49.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:50.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:50 compute-0 nova_compute[261524]: 2025-09-30 14:48:50.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1000: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:48:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:51 compute-0 nova_compute[261524]: 2025-09-30 14:48:51.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:51 compute-0 ceph-mon[74194]: pgmap v1000: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:48:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:52.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1001: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:48:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:48:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:53.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:48:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:53.683Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:48:53 compute-0 ceph-mon[74194]: pgmap v1001: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:48:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:54 compute-0 ovn_controller[154021]: 2025-09-30T14:48:54Z|00087|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Sep 30 14:48:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:48:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:54.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:48:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:54] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Sep 30 14:48:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:48:54] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Sep 30 14:48:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1002: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Sep 30 14:48:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:55.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:55 compute-0 nova_compute[261524]: 2025-09-30 14:48:55.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:56 compute-0 ceph-mon[74194]: pgmap v1002: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Sep 30 14:48:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:56.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:56 compute-0 nova_compute[261524]: 2025-09-30 14:48:56.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1003: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Sep 30 14:48:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:48:57.183Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:48:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:48:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:57.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:58 compute-0 ceph-mon[74194]: pgmap v1003: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Sep 30 14:48:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:48:58.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1004: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 14:48:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:48:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:48:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:48:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:48:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:48:59 compute-0 nova_compute[261524]: 2025-09-30 14:48:59.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:48:59 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:59.563 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:48:59 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:48:59.565 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:48:59
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.mgr', 'backups', '.nfs']
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:48:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:48:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:48:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:48:59.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:48:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:48:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:48:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:49:00 compute-0 ceph-mon[74194]: pgmap v1004: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 14:49:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:49:00 compute-0 sudo[281598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:49:00 compute-0 sudo[281598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:00 compute-0 sudo[281598]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:00 compute-0 sudo[281623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:49:00 compute-0 sudo[281623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:00.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:00 compute-0 nova_compute[261524]: 2025-09-30 14:49:00.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1005: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 14:49:00 compute-0 sudo[281623]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:49:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:49:01 compute-0 sudo[281679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:49:01 compute-0 sudo[281679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:01 compute-0 sudo[281679]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:01 compute-0 sudo[281704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Sep 30 14:49:01 compute-0 sudo[281704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:49:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:49:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:49:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:49:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:49:01 compute-0 sudo[281704]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:49:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:49:01 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:49:01 compute-0 sudo[281751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:49:01 compute-0 sudo[281751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:01 compute-0 sudo[281751]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:01.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:01 compute-0 sudo[281776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:49:01 compute-0 sudo[281776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:01 compute-0 nova_compute[261524]: 2025-09-30 14:49:01.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:02 compute-0 podman[281845]: 2025-09-30 14:49:02.090594215 +0000 UTC m=+0.056451383 container create 73c298d6cf6759ec404929c2d32c69be493960e728af4c1a3194ca91d7fe17f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dijkstra, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:49:02 compute-0 systemd[1]: Started libpod-conmon-73c298d6cf6759ec404929c2d32c69be493960e728af4c1a3194ca91d7fe17f1.scope.
Sep 30 14:49:02 compute-0 podman[281845]: 2025-09-30 14:49:02.064076529 +0000 UTC m=+0.029933717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:49:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:49:02 compute-0 ceph-mon[74194]: pgmap v1005: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:49:02 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:49:02 compute-0 podman[281845]: 2025-09-30 14:49:02.254681266 +0000 UTC m=+0.220538404 container init 73c298d6cf6759ec404929c2d32c69be493960e728af4c1a3194ca91d7fe17f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dijkstra, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:49:02 compute-0 podman[281845]: 2025-09-30 14:49:02.264821532 +0000 UTC m=+0.230678650 container start 73c298d6cf6759ec404929c2d32c69be493960e728af4c1a3194ca91d7fe17f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:49:02 compute-0 podman[281845]: 2025-09-30 14:49:02.268971551 +0000 UTC m=+0.234828749 container attach 73c298d6cf6759ec404929c2d32c69be493960e728af4c1a3194ca91d7fe17f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dijkstra, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:49:02 compute-0 awesome_dijkstra[281862]: 167 167
Sep 30 14:49:02 compute-0 systemd[1]: libpod-73c298d6cf6759ec404929c2d32c69be493960e728af4c1a3194ca91d7fe17f1.scope: Deactivated successfully.
Sep 30 14:49:02 compute-0 podman[281845]: 2025-09-30 14:49:02.272084523 +0000 UTC m=+0.237941681 container died 73c298d6cf6759ec404929c2d32c69be493960e728af4c1a3194ca91d7fe17f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:49:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-79cd6582cd0137131480d31c3eb3cf1ca4f352652a06eded69904e3c347b594c-merged.mount: Deactivated successfully.
Sep 30 14:49:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:02.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:02 compute-0 podman[281845]: 2025-09-30 14:49:02.375329415 +0000 UTC m=+0.341186573 container remove 73c298d6cf6759ec404929c2d32c69be493960e728af4c1a3194ca91d7fe17f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 14:49:02 compute-0 systemd[1]: libpod-conmon-73c298d6cf6759ec404929c2d32c69be493960e728af4c1a3194ca91d7fe17f1.scope: Deactivated successfully.
Sep 30 14:49:02 compute-0 podman[281889]: 2025-09-30 14:49:02.585566177 +0000 UTC m=+0.053289810 container create a0d1ca80c848f09ae331bb50dfefc8045135bae2e6e6bb870ec2ccb2be61d6ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ardinghelli, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:49:02 compute-0 systemd[1]: Started libpod-conmon-a0d1ca80c848f09ae331bb50dfefc8045135bae2e6e6bb870ec2ccb2be61d6ba.scope.
Sep 30 14:49:02 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:49:02 compute-0 podman[281889]: 2025-09-30 14:49:02.561863835 +0000 UTC m=+0.029587478 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2e9f5c298cb02021305cc2ef3d5e91b4497c84d4e8c0d8d0832397cb6876c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2e9f5c298cb02021305cc2ef3d5e91b4497c84d4e8c0d8d0832397cb6876c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2e9f5c298cb02021305cc2ef3d5e91b4497c84d4e8c0d8d0832397cb6876c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2e9f5c298cb02021305cc2ef3d5e91b4497c84d4e8c0d8d0832397cb6876c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2e9f5c298cb02021305cc2ef3d5e91b4497c84d4e8c0d8d0832397cb6876c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:02 compute-0 podman[281889]: 2025-09-30 14:49:02.8053377 +0000 UTC m=+0.273061333 container init a0d1ca80c848f09ae331bb50dfefc8045135bae2e6e6bb870ec2ccb2be61d6ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ardinghelli, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:49:02 compute-0 podman[281889]: 2025-09-30 14:49:02.817819238 +0000 UTC m=+0.285542831 container start a0d1ca80c848f09ae331bb50dfefc8045135bae2e6e6bb870ec2ccb2be61d6ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ardinghelli, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:49:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1006: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:49:02 compute-0 podman[281889]: 2025-09-30 14:49:02.883236416 +0000 UTC m=+0.350960009 container attach a0d1ca80c848f09ae331bb50dfefc8045135bae2e6e6bb870ec2ccb2be61d6ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ardinghelli, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Sep 30 14:49:03 compute-0 nifty_ardinghelli[281904]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:49:03 compute-0 nifty_ardinghelli[281904]: --> All data devices are unavailable
Sep 30 14:49:03 compute-0 systemd[1]: libpod-a0d1ca80c848f09ae331bb50dfefc8045135bae2e6e6bb870ec2ccb2be61d6ba.scope: Deactivated successfully.
Sep 30 14:49:03 compute-0 podman[281889]: 2025-09-30 14:49:03.209095086 +0000 UTC m=+0.676818669 container died a0d1ca80c848f09ae331bb50dfefc8045135bae2e6e6bb870ec2ccb2be61d6ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ardinghelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:49:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e2e9f5c298cb02021305cc2ef3d5e91b4497c84d4e8c0d8d0832397cb6876c5-merged.mount: Deactivated successfully.
Sep 30 14:49:03 compute-0 podman[281889]: 2025-09-30 14:49:03.429729141 +0000 UTC m=+0.897452734 container remove a0d1ca80c848f09ae331bb50dfefc8045135bae2e6e6bb870ec2ccb2be61d6ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_ardinghelli, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:49:03 compute-0 systemd[1]: libpod-conmon-a0d1ca80c848f09ae331bb50dfefc8045135bae2e6e6bb870ec2ccb2be61d6ba.scope: Deactivated successfully.
Sep 30 14:49:03 compute-0 sudo[281776]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:03.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:03 compute-0 sudo[281934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:49:03 compute-0 sudo[281934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:03 compute-0 sudo[281934]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:03 compute-0 sudo[281959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:49:03 compute-0 sudo[281959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:03.684Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:49:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:04 compute-0 podman[282025]: 2025-09-30 14:49:04.102818812 +0000 UTC m=+0.040545706 container create 7ac04da5a43aa1a1ea20f591df2a38b0e52b5ca864fd9f015535239d599ac95f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_shtern, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:49:04 compute-0 systemd[1]: Started libpod-conmon-7ac04da5a43aa1a1ea20f591df2a38b0e52b5ca864fd9f015535239d599ac95f.scope.
Sep 30 14:49:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:49:04 compute-0 podman[282025]: 2025-09-30 14:49:04.084274555 +0000 UTC m=+0.022001449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:49:04 compute-0 podman[282025]: 2025-09-30 14:49:04.18728847 +0000 UTC m=+0.125015344 container init 7ac04da5a43aa1a1ea20f591df2a38b0e52b5ca864fd9f015535239d599ac95f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:49:04 compute-0 podman[282025]: 2025-09-30 14:49:04.195071375 +0000 UTC m=+0.132798249 container start 7ac04da5a43aa1a1ea20f591df2a38b0e52b5ca864fd9f015535239d599ac95f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:49:04 compute-0 podman[282025]: 2025-09-30 14:49:04.198379142 +0000 UTC m=+0.136106036 container attach 7ac04da5a43aa1a1ea20f591df2a38b0e52b5ca864fd9f015535239d599ac95f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_shtern, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:49:04 compute-0 busy_shtern[282042]: 167 167
Sep 30 14:49:04 compute-0 systemd[1]: libpod-7ac04da5a43aa1a1ea20f591df2a38b0e52b5ca864fd9f015535239d599ac95f.scope: Deactivated successfully.
Sep 30 14:49:04 compute-0 podman[282025]: 2025-09-30 14:49:04.200089117 +0000 UTC m=+0.137815991 container died 7ac04da5a43aa1a1ea20f591df2a38b0e52b5ca864fd9f015535239d599ac95f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_shtern, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 14:49:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e4e53c6142f7abd6ea365aa581e866e52ffc7dde73eca60c48b0b33697bf5cb-merged.mount: Deactivated successfully.
Sep 30 14:49:04 compute-0 podman[282025]: 2025-09-30 14:49:04.24208675 +0000 UTC m=+0.179813624 container remove 7ac04da5a43aa1a1ea20f591df2a38b0e52b5ca864fd9f015535239d599ac95f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_shtern, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Sep 30 14:49:04 compute-0 systemd[1]: libpod-conmon-7ac04da5a43aa1a1ea20f591df2a38b0e52b5ca864fd9f015535239d599ac95f.scope: Deactivated successfully.
Sep 30 14:49:04 compute-0 podman[282045]: 2025-09-30 14:49:04.290904752 +0000 UTC m=+0.098444197 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Sep 30 14:49:04 compute-0 ceph-mon[74194]: pgmap v1006: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:49:04 compute-0 podman[282047]: 2025-09-30 14:49:04.301073289 +0000 UTC m=+0.104013863 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 14:49:04 compute-0 podman[282046]: 2025-09-30 14:49:04.306080391 +0000 UTC m=+0.108388708 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd)
Sep 30 14:49:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:04.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:04 compute-0 podman[282066]: 2025-09-30 14:49:04.336087229 +0000 UTC m=+0.104519216 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Sep 30 14:49:04 compute-0 podman[282146]: 2025-09-30 14:49:04.416808179 +0000 UTC m=+0.038302797 container create 050e73273e825964498ca193c8b6ad888da351003e7cba74f087b6f1eae7d40e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Sep 30 14:49:04 compute-0 systemd[1]: Started libpod-conmon-050e73273e825964498ca193c8b6ad888da351003e7cba74f087b6f1eae7d40e.scope.
Sep 30 14:49:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce93dbbc45d41da148151a2dcc96df8378d83256815886ca418c3d0059db0a34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce93dbbc45d41da148151a2dcc96df8378d83256815886ca418c3d0059db0a34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce93dbbc45d41da148151a2dcc96df8378d83256815886ca418c3d0059db0a34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce93dbbc45d41da148151a2dcc96df8378d83256815886ca418c3d0059db0a34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:04 compute-0 podman[282146]: 2025-09-30 14:49:04.489427507 +0000 UTC m=+0.110922155 container init 050e73273e825964498ca193c8b6ad888da351003e7cba74f087b6f1eae7d40e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:49:04 compute-0 podman[282146]: 2025-09-30 14:49:04.496162964 +0000 UTC m=+0.117657582 container start 050e73273e825964498ca193c8b6ad888da351003e7cba74f087b6f1eae7d40e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:49:04 compute-0 podman[282146]: 2025-09-30 14:49:04.401949089 +0000 UTC m=+0.023443727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:49:04 compute-0 podman[282146]: 2025-09-30 14:49:04.499181883 +0000 UTC m=+0.120676521 container attach 050e73273e825964498ca193c8b6ad888da351003e7cba74f087b6f1eae7d40e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Sep 30 14:49:04 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:49:04.567 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:49:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:04] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:49:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:04] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Sep 30 14:49:04 compute-0 epic_noether[282162]: {
Sep 30 14:49:04 compute-0 epic_noether[282162]:     "0": [
Sep 30 14:49:04 compute-0 epic_noether[282162]:         {
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "devices": [
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "/dev/loop3"
Sep 30 14:49:04 compute-0 epic_noether[282162]:             ],
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "lv_name": "ceph_lv0",
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "lv_size": "21470642176",
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "name": "ceph_lv0",
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "tags": {
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.cluster_name": "ceph",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.crush_device_class": "",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.encrypted": "0",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.osd_id": "0",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.type": "block",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.vdo": "0",
Sep 30 14:49:04 compute-0 epic_noether[282162]:                 "ceph.with_tpm": "0"
Sep 30 14:49:04 compute-0 epic_noether[282162]:             },
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "type": "block",
Sep 30 14:49:04 compute-0 epic_noether[282162]:             "vg_name": "ceph_vg0"
Sep 30 14:49:04 compute-0 epic_noether[282162]:         }
Sep 30 14:49:04 compute-0 epic_noether[282162]:     ]
Sep 30 14:49:04 compute-0 epic_noether[282162]: }
Sep 30 14:49:04 compute-0 systemd[1]: libpod-050e73273e825964498ca193c8b6ad888da351003e7cba74f087b6f1eae7d40e.scope: Deactivated successfully.
Sep 30 14:49:04 compute-0 podman[282146]: 2025-09-30 14:49:04.798884516 +0000 UTC m=+0.420379144 container died 050e73273e825964498ca193c8b6ad888da351003e7cba74f087b6f1eae7d40e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Sep 30 14:49:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce93dbbc45d41da148151a2dcc96df8378d83256815886ca418c3d0059db0a34-merged.mount: Deactivated successfully.
Sep 30 14:49:04 compute-0 podman[282146]: 2025-09-30 14:49:04.849428543 +0000 UTC m=+0.470923191 container remove 050e73273e825964498ca193c8b6ad888da351003e7cba74f087b6f1eae7d40e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:49:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1007: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:49:04 compute-0 systemd[1]: libpod-conmon-050e73273e825964498ca193c8b6ad888da351003e7cba74f087b6f1eae7d40e.scope: Deactivated successfully.
Sep 30 14:49:04 compute-0 sudo[281959]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:05 compute-0 sudo[282183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:49:05 compute-0 sudo[282183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:05 compute-0 sudo[282183]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:05 compute-0 sudo[282208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:49:05 compute-0 sudo[282208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:05 compute-0 podman[282276]: 2025-09-30 14:49:05.571498559 +0000 UTC m=+0.055501358 container create edb39db9b98f33926cc64b3f534dc06476b019445b75fc269bcd1464635fa27f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hodgkin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 14:49:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:05.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:05 compute-0 ceph-mon[74194]: pgmap v1007: 337 pgs: 337 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:49:05 compute-0 systemd[1]: Started libpod-conmon-edb39db9b98f33926cc64b3f534dc06476b019445b75fc269bcd1464635fa27f.scope.
Sep 30 14:49:05 compute-0 podman[282276]: 2025-09-30 14:49:05.543740821 +0000 UTC m=+0.027743680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:49:05 compute-0 nova_compute[261524]: 2025-09-30 14:49:05.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:49:05 compute-0 podman[282276]: 2025-09-30 14:49:05.714566447 +0000 UTC m=+0.198569296 container init edb39db9b98f33926cc64b3f534dc06476b019445b75fc269bcd1464635fa27f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hodgkin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Sep 30 14:49:05 compute-0 podman[282276]: 2025-09-30 14:49:05.721457498 +0000 UTC m=+0.205460267 container start edb39db9b98f33926cc64b3f534dc06476b019445b75fc269bcd1464635fa27f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hodgkin, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:49:05 compute-0 podman[282276]: 2025-09-30 14:49:05.724935399 +0000 UTC m=+0.208938268 container attach edb39db9b98f33926cc64b3f534dc06476b019445b75fc269bcd1464635fa27f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hodgkin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:49:05 compute-0 wonderful_hodgkin[282292]: 167 167
Sep 30 14:49:05 compute-0 systemd[1]: libpod-edb39db9b98f33926cc64b3f534dc06476b019445b75fc269bcd1464635fa27f.scope: Deactivated successfully.
Sep 30 14:49:05 compute-0 podman[282276]: 2025-09-30 14:49:05.730027473 +0000 UTC m=+0.214030252 container died edb39db9b98f33926cc64b3f534dc06476b019445b75fc269bcd1464635fa27f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:49:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-499805648aecb97ae3af473705cfa60fb2eb25bc019c53c48cec408b4c695b84-merged.mount: Deactivated successfully.
Sep 30 14:49:05 compute-0 podman[282276]: 2025-09-30 14:49:05.784657898 +0000 UTC m=+0.268660707 container remove edb39db9b98f33926cc64b3f534dc06476b019445b75fc269bcd1464635fa27f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:49:05 compute-0 systemd[1]: libpod-conmon-edb39db9b98f33926cc64b3f534dc06476b019445b75fc269bcd1464635fa27f.scope: Deactivated successfully.
Sep 30 14:49:05 compute-0 podman[282318]: 2025-09-30 14:49:05.956749519 +0000 UTC m=+0.051325790 container create c660bb15ea72e4667b2d6bdd2bb586a0054d77ff0db3fb65de4b6c18770f3cfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:49:06 compute-0 systemd[1]: Started libpod-conmon-c660bb15ea72e4667b2d6bdd2bb586a0054d77ff0db3fb65de4b6c18770f3cfd.scope.
Sep 30 14:49:06 compute-0 podman[282318]: 2025-09-30 14:49:05.935419298 +0000 UTC m=+0.029995619 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:49:06 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:49:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05eafd0fb39e8fa5039f75a488070588bb0bae4e97849ae1b8090883037ef9ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05eafd0fb39e8fa5039f75a488070588bb0bae4e97849ae1b8090883037ef9ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05eafd0fb39e8fa5039f75a488070588bb0bae4e97849ae1b8090883037ef9ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05eafd0fb39e8fa5039f75a488070588bb0bae4e97849ae1b8090883037ef9ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:49:06 compute-0 podman[282318]: 2025-09-30 14:49:06.055208785 +0000 UTC m=+0.149785106 container init c660bb15ea72e4667b2d6bdd2bb586a0054d77ff0db3fb65de4b6c18770f3cfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lehmann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:49:06 compute-0 podman[282318]: 2025-09-30 14:49:06.06833622 +0000 UTC m=+0.162912531 container start c660bb15ea72e4667b2d6bdd2bb586a0054d77ff0db3fb65de4b6c18770f3cfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lehmann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:49:06 compute-0 podman[282318]: 2025-09-30 14:49:06.074237785 +0000 UTC m=+0.168814096 container attach c660bb15ea72e4667b2d6bdd2bb586a0054d77ff0db3fb65de4b6c18770f3cfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:49:06 compute-0 sudo[282337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:49:06 compute-0 sudo[282337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:06 compute-0 sudo[282337]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:06.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:06 compute-0 lvm[282433]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:49:06 compute-0 lvm[282433]: VG ceph_vg0 finished
Sep 30 14:49:06 compute-0 beautiful_lehmann[282334]: {}
Sep 30 14:49:06 compute-0 systemd[1]: libpod-c660bb15ea72e4667b2d6bdd2bb586a0054d77ff0db3fb65de4b6c18770f3cfd.scope: Deactivated successfully.
Sep 30 14:49:06 compute-0 systemd[1]: libpod-c660bb15ea72e4667b2d6bdd2bb586a0054d77ff0db3fb65de4b6c18770f3cfd.scope: Consumed 1.167s CPU time.
Sep 30 14:49:06 compute-0 podman[282318]: 2025-09-30 14:49:06.815475235 +0000 UTC m=+0.910051526 container died c660bb15ea72e4667b2d6bdd2bb586a0054d77ff0db3fb65de4b6c18770f3cfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lehmann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:49:06 compute-0 nova_compute[261524]: 2025-09-30 14:49:06.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-05eafd0fb39e8fa5039f75a488070588bb0bae4e97849ae1b8090883037ef9ae-merged.mount: Deactivated successfully.
Sep 30 14:49:06 compute-0 podman[282318]: 2025-09-30 14:49:06.86478134 +0000 UTC m=+0.959357651 container remove c660bb15ea72e4667b2d6bdd2bb586a0054d77ff0db3fb65de4b6c18770f3cfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:49:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1008: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Sep 30 14:49:06 compute-0 systemd[1]: libpod-conmon-c660bb15ea72e4667b2d6bdd2bb586a0054d77ff0db3fb65de4b6c18770f3cfd.scope: Deactivated successfully.
Sep 30 14:49:06 compute-0 sudo[282208]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:49:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:06 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:49:06 compute-0 nova_compute[261524]: 2025-09-30 14:49:06.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:06 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:07 compute-0 sudo[282450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:49:07 compute-0 sudo[282450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:07 compute-0 sudo[282450]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:07.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:49:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:07.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:07 compute-0 ceph-mon[74194]: pgmap v1008: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Sep 30 14:49:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:07 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:49:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:49:08 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4010485668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1009: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 13 KiB/s wr, 23 op/s
Sep 30 14:49:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:09 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4010485668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:09.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:10 compute-0 ceph-mon[74194]: pgmap v1009: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 13 KiB/s wr, 23 op/s
Sep 30 14:49:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:10.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:10 compute-0 nova_compute[261524]: 2025-09-30 14:49:10.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1010: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 13 KiB/s wr, 23 op/s
Sep 30 14:49:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 14:49:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2765138727' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:49:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 14:49:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2765138727' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:49:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2765138727' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:49:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2765138727' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:49:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:11.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:11 compute-0 nova_compute[261524]: 2025-09-30 14:49:11.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:12.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:12 compute-0 ceph-mon[74194]: pgmap v1010: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 13 KiB/s wr, 23 op/s
Sep 30 14:49:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1011: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 13 KiB/s wr, 29 op/s
Sep 30 14:49:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:13.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:13 compute-0 ceph-mon[74194]: pgmap v1011: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 13 KiB/s wr, 29 op/s
Sep 30 14:49:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:13.686Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:49:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:13.686Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:49:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:13.686Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:49:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:14.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:49:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:49:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:14] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Sep 30 14:49:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:14] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Sep 30 14:49:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:49:14 compute-0 nova_compute[261524]: 2025-09-30 14:49:14.969 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:49:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:15.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:15 compute-0 nova_compute[261524]: 2025-09-30 14:49:15.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:15 compute-0 nova_compute[261524]: 2025-09-30 14:49:15.947 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:15 compute-0 nova_compute[261524]: 2025-09-30 14:49:15.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:15 compute-0 nova_compute[261524]: 2025-09-30 14:49:15.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:49:15 compute-0 nova_compute[261524]: 2025-09-30 14:49:15.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:49:15 compute-0 nova_compute[261524]: 2025-09-30 14:49:15.969 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:49:15 compute-0 nova_compute[261524]: 2025-09-30 14:49:15.969 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:16 compute-0 ceph-mon[74194]: pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:49:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:16.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:16 compute-0 nova_compute[261524]: 2025-09-30 14:49:16.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:49:16 compute-0 nova_compute[261524]: 2025-09-30 14:49:16.951 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:16 compute-0 nova_compute[261524]: 2025-09-30 14:49:16.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:16 compute-0 nova_compute[261524]: 2025-09-30 14:49:16.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Sep 30 14:49:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/578160507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/85566318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2659227442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:17.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:49:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:17.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:17 compute-0 nova_compute[261524]: 2025-09-30 14:49:17.965 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:17 compute-0 nova_compute[261524]: 2025-09-30 14:49:17.965 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:17 compute-0 nova_compute[261524]: 2025-09-30 14:49:17.965 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Sep 30 14:49:17 compute-0 nova_compute[261524]: 2025-09-30 14:49:17.981 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Sep 30 14:49:18 compute-0 ceph-mon[74194]: pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:49:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/925726723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:18.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 511 B/s wr, 6 op/s
Sep 30 14:49:18 compute-0 nova_compute[261524]: 2025-09-30 14:49:18.969 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:18 compute-0 nova_compute[261524]: 2025-09-30 14:49:18.969 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:49:18 compute-0 nova_compute[261524]: 2025-09-30 14:49:18.970 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:18 compute-0 nova_compute[261524]: 2025-09-30 14:49:18.993 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:49:18 compute-0 nova_compute[261524]: 2025-09-30 14:49:18.993 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:49:18 compute-0 nova_compute[261524]: 2025-09-30 14:49:18.994 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:49:18 compute-0 nova_compute[261524]: 2025-09-30 14:49:18.994 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:49:18 compute-0 nova_compute[261524]: 2025-09-30 14:49:18.995 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:49:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:49:19 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2009360547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:19 compute-0 nova_compute[261524]: 2025-09-30 14:49:19.474 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:49:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:19.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:19 compute-0 nova_compute[261524]: 2025-09-30 14:49:19.649 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:49:19 compute-0 nova_compute[261524]: 2025-09-30 14:49:19.651 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4568MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:49:19 compute-0 nova_compute[261524]: 2025-09-30 14:49:19.651 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:49:19 compute-0 nova_compute[261524]: 2025-09-30 14:49:19.651 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:49:19 compute-0 nova_compute[261524]: 2025-09-30 14:49:19.732 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:49:19 compute-0 nova_compute[261524]: 2025-09-30 14:49:19.733 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:49:19 compute-0 nova_compute[261524]: 2025-09-30 14:49:19.774 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:49:20 compute-0 ceph-mon[74194]: pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 511 B/s wr, 6 op/s
Sep 30 14:49:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2009360547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:49:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735749317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:20 compute-0 nova_compute[261524]: 2025-09-30 14:49:20.264 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:49:20 compute-0 nova_compute[261524]: 2025-09-30 14:49:20.273 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:49:20 compute-0 nova_compute[261524]: 2025-09-30 14:49:20.292 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:49:20 compute-0 nova_compute[261524]: 2025-09-30 14:49:20.295 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:49:20 compute-0 nova_compute[261524]: 2025-09-30 14:49:20.296 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:49:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:20.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:20 compute-0 nova_compute[261524]: 2025-09-30 14:49:20.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 511 B/s wr, 6 op/s
Sep 30 14:49:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3735749317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:21 compute-0 nova_compute[261524]: 2025-09-30 14:49:21.280 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:49:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:21.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:21 compute-0 nova_compute[261524]: 2025-09-30 14:49:21.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:22 compute-0 ceph-mon[74194]: pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 511 B/s wr, 6 op/s
Sep 30 14:49:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:22.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 511 B/s wr, 6 op/s
Sep 30 14:49:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:23.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:23.688Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:49:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:24 compute-0 ceph-mon[74194]: pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 511 B/s wr, 6 op/s
Sep 30 14:49:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:49:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:24.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:49:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:24] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Sep 30 14:49:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:24] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Sep 30 14:49:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:49:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:25.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:25 compute-0 nova_compute[261524]: 2025-09-30 14:49:25.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:26 compute-0 sudo[282539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:49:26 compute-0 sudo[282539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:26 compute-0 sudo[282539]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:26 compute-0 ceph-mon[74194]: pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:49:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:26.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:26 compute-0 nova_compute[261524]: 2025-09-30 14:49:26.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:49:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:27.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:49:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:27.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:28 compute-0 ceph-mon[74194]: pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:49:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:28.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:49:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:29.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:49:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:49:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:49:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:49:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:49:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:49:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:49:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:49:30 compute-0 ceph-mon[74194]: pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:49:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:49:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:30.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:30 compute-0 nova_compute[261524]: 2025-09-30 14:49:30.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:49:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:31.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:31 compute-0 nova_compute[261524]: 2025-09-30 14:49:31.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:32 compute-0 ceph-mon[74194]: pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:49:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:32.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:49:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3361713888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:33.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:33.689Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:49:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:34 compute-0 ceph-mon[74194]: pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:49:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:34.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:34] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:49:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:34] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:49:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:49:35 compute-0 podman[282572]: 2025-09-30 14:49:35.154952592 +0000 UTC m=+0.075929166 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid)
Sep 30 14:49:35 compute-0 podman[282580]: 2025-09-30 14:49:35.171803924 +0000 UTC m=+0.075110944 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Sep 30 14:49:35 compute-0 podman[282573]: 2025-09-30 14:49:35.205299264 +0000 UTC m=+0.113921473 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Sep 30 14:49:35 compute-0 podman[282574]: 2025-09-30 14:49:35.206156077 +0000 UTC m=+0.108997235 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Sep 30 14:49:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:35.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:35 compute-0 nova_compute[261524]: 2025-09-30 14:49:35.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:35 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=infra.usagestats t=2025-09-30T14:49:35.893948604Z level=info msg="Usage stats are ready to report"
Sep 30 14:49:36 compute-0 ceph-mon[74194]: pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:49:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:36.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:36 compute-0 nova_compute[261524]: 2025-09-30 14:49:36.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1023: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:49:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:37.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:49:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1102878922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:49:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:37.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:49:38.266 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:49:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:49:38.266 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:49:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:49:38.266 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:49:38 compute-0 ceph-mon[74194]: pgmap v1023: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:49:38 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2760652322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:49:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:38.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1024: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:49:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:39.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:40 compute-0 ceph-mon[74194]: pgmap v1024: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:49:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:49:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:40.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:49:40 compute-0 nova_compute[261524]: 2025-09-30 14:49:40.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1025: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:49:41 compute-0 ceph-mon[74194]: pgmap v1025: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Sep 30 14:49:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:41.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:41 compute-0 nova_compute[261524]: 2025-09-30 14:49:41.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:42.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1026: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Sep 30 14:49:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:43.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:43.690Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:49:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:43.690Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:49:43 compute-0 ceph-mon[74194]: pgmap v1026: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Sep 30 14:49:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:44.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:49:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:49:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:44] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:49:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:44] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:49:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1027: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Sep 30 14:49:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:49:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:45.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:45 compute-0 nova_compute[261524]: 2025-09-30 14:49:45.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:46 compute-0 ceph-mon[74194]: pgmap v1027: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Sep 30 14:49:46 compute-0 sudo[282666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:49:46 compute-0 sudo[282666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:49:46 compute-0 sudo[282666]: pam_unix(sudo:session): session closed for user root
Sep 30 14:49:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:46.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1028: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:49:46 compute-0 nova_compute[261524]: 2025-09-30 14:49:46.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:47.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:49:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:47.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:48 compute-0 ceph-mon[74194]: pgmap v1028: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:49:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1029: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:49:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:49:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:49.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:49:50 compute-0 ceph-mon[74194]: pgmap v1029: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:49:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:50.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:50 compute-0 nova_compute[261524]: 2025-09-30 14:49:50.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1030: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:49:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:49:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:51.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:49:51 compute-0 nova_compute[261524]: 2025-09-30 14:49:51.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:52 compute-0 ceph-mon[74194]: pgmap v1030: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Sep 30 14:49:52 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/205106492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:49:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:52.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1031: 337 pgs: 337 active+clean; 109 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Sep 30 14:49:53 compute-0 ceph-mon[74194]: pgmap v1031: 337 pgs: 337 active+clean; 109 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Sep 30 14:49:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:53.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:53.691Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:49:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:49:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:54.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:49:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:54] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:49:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:49:54] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:49:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1032: 337 pgs: 337 active+clean; 109 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Sep 30 14:49:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:55 compute-0 nova_compute[261524]: 2025-09-30 14:49:55.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:56 compute-0 ceph-mon[74194]: pgmap v1032: 337 pgs: 337 active+clean; 109 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Sep 30 14:49:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:56.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1033: 337 pgs: 337 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 483 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Sep 30 14:49:56 compute-0 nova_compute[261524]: 2025-09-30 14:49:56.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:49:57 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/331242983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:49:57 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4184171363' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:49:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:49:57.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:49:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:49:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:57.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:58 compute-0 ceph-mon[74194]: pgmap v1033: 337 pgs: 337 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 483 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Sep 30 14:49:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:49:58.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1034: 337 pgs: 337 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 14:49:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:49:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:49:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:49:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:49:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:49:59
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['.nfs', 'default.rgw.log', 'images', 'default.rgw.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'backups', 'cephfs.cephfs.meta']
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:49:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:49:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:49:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:49:59.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:49:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:49:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:49:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:50:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001102599282900306 of space, bias 1.0, pg target 0.3307797848700918 quantized to 32 (current 32)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:50:00 compute-0 ceph-mon[74194]: pgmap v1034: 337 pgs: 337 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 14:50:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:50:00 compute-0 ceph-mon[74194]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 14:50:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:00 compute-0 nova_compute[261524]: 2025-09-30 14:50:00.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1035: 337 pgs: 337 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:50:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:50:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:50:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:50:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:50:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:50:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:50:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:01.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:01 compute-0 nova_compute[261524]: 2025-09-30 14:50:01.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:01 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Sep 30 14:50:02 compute-0 ceph-mon[74194]: pgmap v1035: 337 pgs: 337 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Sep 30 14:50:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:02.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1036: 337 pgs: 337 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Sep 30 14:50:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:03.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:03.693Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:50:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:03.693Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:50:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:04 compute-0 ceph-mon[74194]: pgmap v1036: 337 pgs: 337 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Sep 30 14:50:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:04.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:04] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Sep 30 14:50:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:04] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Sep 30 14:50:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1037: 337 pgs: 337 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Sep 30 14:50:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:05.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:05 compute-0 nova_compute[261524]: 2025-09-30 14:50:05.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:06 compute-0 podman[282713]: 2025-09-30 14:50:06.1349435 +0000 UTC m=+0.060820688 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:50:06 compute-0 podman[282720]: 2025-09-30 14:50:06.156362553 +0000 UTC m=+0.067727290 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:50:06 compute-0 podman[282723]: 2025-09-30 14:50:06.167395183 +0000 UTC m=+0.071115179 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Sep 30 14:50:06 compute-0 podman[282714]: 2025-09-30 14:50:06.18707137 +0000 UTC m=+0.105488092 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Sep 30 14:50:06 compute-0 ceph-mon[74194]: pgmap v1037: 337 pgs: 337 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Sep 30 14:50:06 compute-0 sudo[282798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:50:06 compute-0 sudo[282798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:06 compute-0 sudo[282798]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:06.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1038: 337 pgs: 337 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Sep 30 14:50:06 compute-0 nova_compute[261524]: 2025-09-30 14:50:06.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:07.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:50:07 compute-0 sudo[282823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:50:07 compute-0 sudo[282823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:07 compute-0 sudo[282823]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:07 compute-0 sudo[282848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:50:07 compute-0 sudo[282848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:50:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:07.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:50:07 compute-0 sudo[282848]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:08 compute-0 ceph-mon[74194]: pgmap v1038: 337 pgs: 337 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Sep 30 14:50:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:50:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:08.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:50:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1039: 337 pgs: 337 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Sep 30 14:50:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:09 compute-0 ceph-mon[74194]: pgmap v1039: 337 pgs: 337 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Sep 30 14:50:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:50:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:50:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:09.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:50:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:50:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Sep 30 14:50:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:50:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:10.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 14:50:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 14:50:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:50:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:50:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:50:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:50:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:50:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1040: 337 pgs: 337 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 76 op/s
Sep 30 14:50:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:50:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:50:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:50:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:50:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:50:10 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:50:10 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:50:10 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:50:10 compute-0 nova_compute[261524]: 2025-09-30 14:50:10.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:10 compute-0 sudo[282909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:50:10 compute-0 sudo[282909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:10 compute-0 sudo[282909]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:10 compute-0 sudo[282934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:50:10 compute-0 sudo[282934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 14:50:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592072116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:50:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 14:50:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592072116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:50:11 compute-0 podman[282997]: 2025-09-30 14:50:11.324916395 +0000 UTC m=+0.056424834 container create 7c462b31d57574185585cff089a416201e98cb5cc4f9b8e6fd382fa40c45d83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:50:11 compute-0 systemd[1]: Started libpod-conmon-7c462b31d57574185585cff089a416201e98cb5cc4f9b8e6fd382fa40c45d83c.scope.
Sep 30 14:50:11 compute-0 podman[282997]: 2025-09-30 14:50:11.299922768 +0000 UTC m=+0.031431227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:50:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:50:11 compute-0 podman[282997]: 2025-09-30 14:50:11.426488403 +0000 UTC m=+0.157996832 container init 7c462b31d57574185585cff089a416201e98cb5cc4f9b8e6fd382fa40c45d83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_dewdney, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:50:11 compute-0 podman[282997]: 2025-09-30 14:50:11.436201498 +0000 UTC m=+0.167709907 container start 7c462b31d57574185585cff089a416201e98cb5cc4f9b8e6fd382fa40c45d83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_dewdney, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:50:11 compute-0 podman[282997]: 2025-09-30 14:50:11.439748701 +0000 UTC m=+0.171257110 container attach 7c462b31d57574185585cff089a416201e98cb5cc4f9b8e6fd382fa40c45d83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:50:11 compute-0 inspiring_dewdney[283014]: 167 167
Sep 30 14:50:11 compute-0 systemd[1]: libpod-7c462b31d57574185585cff089a416201e98cb5cc4f9b8e6fd382fa40c45d83c.scope: Deactivated successfully.
Sep 30 14:50:11 compute-0 conmon[283014]: conmon 7c462b31d57574185585 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c462b31d57574185585cff089a416201e98cb5cc4f9b8e6fd382fa40c45d83c.scope/container/memory.events
Sep 30 14:50:11 compute-0 podman[282997]: 2025-09-30 14:50:11.443731606 +0000 UTC m=+0.175240035 container died 7c462b31d57574185585cff089a416201e98cb5cc4f9b8e6fd382fa40c45d83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:50:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-deab5033383a05801673c3290e98b06694842af90b79e26c1f7e406aec77477d-merged.mount: Deactivated successfully.
Sep 30 14:50:11 compute-0 podman[282997]: 2025-09-30 14:50:11.482000351 +0000 UTC m=+0.213508750 container remove 7c462b31d57574185585cff089a416201e98cb5cc4f9b8e6fd382fa40c45d83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:50:11 compute-0 systemd[1]: libpod-conmon-7c462b31d57574185585cff089a416201e98cb5cc4f9b8e6fd382fa40c45d83c.scope: Deactivated successfully.
Sep 30 14:50:11 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Sep 30 14:50:11 compute-0 podman[283039]: 2025-09-30 14:50:11.6643401 +0000 UTC m=+0.051822222 container create 41754dec8f92715b0a4f990b99dd4e3df4340b8c08e401fb685d479f1cf38a62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_carver, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:50:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:11.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:11 compute-0 systemd[1]: Started libpod-conmon-41754dec8f92715b0a4f990b99dd4e3df4340b8c08e401fb685d479f1cf38a62.scope.
Sep 30 14:50:11 compute-0 podman[283039]: 2025-09-30 14:50:11.639391855 +0000 UTC m=+0.026873997 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:50:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:50:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 14:50:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:50:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:50:11 compute-0 ceph-mon[74194]: pgmap v1040: 337 pgs: 337 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 76 op/s
Sep 30 14:50:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:50:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:50:11 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:50:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2592072116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:50:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2592072116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d62c188c4fa995a35284920efe1895bf213b2060c63ce30c9f8dd326b1842fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d62c188c4fa995a35284920efe1895bf213b2060c63ce30c9f8dd326b1842fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d62c188c4fa995a35284920efe1895bf213b2060c63ce30c9f8dd326b1842fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d62c188c4fa995a35284920efe1895bf213b2060c63ce30c9f8dd326b1842fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d62c188c4fa995a35284920efe1895bf213b2060c63ce30c9f8dd326b1842fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:11 compute-0 nova_compute[261524]: 2025-09-30 14:50:11.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:11 compute-0 podman[283039]: 2025-09-30 14:50:11.918681191 +0000 UTC m=+0.306163373 container init 41754dec8f92715b0a4f990b99dd4e3df4340b8c08e401fb685d479f1cf38a62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_carver, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:50:11 compute-0 podman[283039]: 2025-09-30 14:50:11.92664093 +0000 UTC m=+0.314123042 container start 41754dec8f92715b0a4f990b99dd4e3df4340b8c08e401fb685d479f1cf38a62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_carver, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:50:11 compute-0 podman[283039]: 2025-09-30 14:50:11.963213271 +0000 UTC m=+0.350695443 container attach 41754dec8f92715b0a4f990b99dd4e3df4340b8c08e401fb685d479f1cf38a62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_carver, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:50:12 compute-0 goofy_carver[283055]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:50:12 compute-0 goofy_carver[283055]: --> All data devices are unavailable
Sep 30 14:50:12 compute-0 systemd[1]: libpod-41754dec8f92715b0a4f990b99dd4e3df4340b8c08e401fb685d479f1cf38a62.scope: Deactivated successfully.
Sep 30 14:50:12 compute-0 podman[283039]: 2025-09-30 14:50:12.330051787 +0000 UTC m=+0.717533869 container died 41754dec8f92715b0a4f990b99dd4e3df4340b8c08e401fb685d479f1cf38a62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d62c188c4fa995a35284920efe1895bf213b2060c63ce30c9f8dd326b1842fb-merged.mount: Deactivated successfully.
Sep 30 14:50:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:12 compute-0 podman[283039]: 2025-09-30 14:50:12.410311365 +0000 UTC m=+0.797793447 container remove 41754dec8f92715b0a4f990b99dd4e3df4340b8c08e401fb685d479f1cf38a62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_carver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:50:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:12.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:12 compute-0 systemd[1]: libpod-conmon-41754dec8f92715b0a4f990b99dd4e3df4340b8c08e401fb685d479f1cf38a62.scope: Deactivated successfully.
Sep 30 14:50:12 compute-0 sudo[282934]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:12 compute-0 sudo[283085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:50:12 compute-0 sudo[283085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:12 compute-0 sudo[283085]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:12 compute-0 sudo[283110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:50:12 compute-0 sudo[283110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1041: 337 pgs: 337 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Sep 30 14:50:12 compute-0 ceph-mon[74194]: Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Sep 30 14:50:13 compute-0 podman[283176]: 2025-09-30 14:50:13.062345664 +0000 UTC m=+0.041970214 container create 4dd1ad5e4715f49188356dbf96db650321689b6320bfb5dd948d305474223185 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:50:13 compute-0 systemd[1]: Started libpod-conmon-4dd1ad5e4715f49188356dbf96db650321689b6320bfb5dd948d305474223185.scope.
Sep 30 14:50:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:50:13 compute-0 podman[283176]: 2025-09-30 14:50:13.046445896 +0000 UTC m=+0.026070466 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:50:13 compute-0 podman[283176]: 2025-09-30 14:50:13.156529308 +0000 UTC m=+0.136153878 container init 4dd1ad5e4715f49188356dbf96db650321689b6320bfb5dd948d305474223185 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:50:13 compute-0 podman[283176]: 2025-09-30 14:50:13.164937489 +0000 UTC m=+0.144562039 container start 4dd1ad5e4715f49188356dbf96db650321689b6320bfb5dd948d305474223185 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:50:13 compute-0 trusting_visvesvaraya[283192]: 167 167
Sep 30 14:50:13 compute-0 systemd[1]: libpod-4dd1ad5e4715f49188356dbf96db650321689b6320bfb5dd948d305474223185.scope: Deactivated successfully.
Sep 30 14:50:13 compute-0 conmon[283192]: conmon 4dd1ad5e4715f4918835 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4dd1ad5e4715f49188356dbf96db650321689b6320bfb5dd948d305474223185.scope/container/memory.events
Sep 30 14:50:13 compute-0 podman[283176]: 2025-09-30 14:50:13.224106363 +0000 UTC m=+0.203730923 container attach 4dd1ad5e4715f49188356dbf96db650321689b6320bfb5dd948d305474223185 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:50:13 compute-0 podman[283176]: 2025-09-30 14:50:13.224778731 +0000 UTC m=+0.204403301 container died 4dd1ad5e4715f49188356dbf96db650321689b6320bfb5dd948d305474223185 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:50:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-990c1e85a5fba16683d17b7409a4189818769cf5e655d3b0f2e7036fc76ae0a4-merged.mount: Deactivated successfully.
Sep 30 14:50:13 compute-0 podman[283176]: 2025-09-30 14:50:13.368353493 +0000 UTC m=+0.347978043 container remove 4dd1ad5e4715f49188356dbf96db650321689b6320bfb5dd948d305474223185 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:50:13 compute-0 systemd[1]: libpod-conmon-4dd1ad5e4715f49188356dbf96db650321689b6320bfb5dd948d305474223185.scope: Deactivated successfully.
Sep 30 14:50:13 compute-0 podman[283218]: 2025-09-30 14:50:13.560630023 +0000 UTC m=+0.031094088 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:50:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:50:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:13.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:50:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:13.693Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:50:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:13.694Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:50:13 compute-0 podman[283218]: 2025-09-30 14:50:13.734242362 +0000 UTC m=+0.204706357 container create 2d27340269254bc027f309893661c39d8db31512205e980dba199c30eb71ba43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:50:13 compute-0 systemd[1]: Started libpod-conmon-2d27340269254bc027f309893661c39d8db31512205e980dba199c30eb71ba43.scope.
Sep 30 14:50:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dea01e1082056852007695e4b5f98bd73842b431690dcd0e7eec6bd1d57161/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dea01e1082056852007695e4b5f98bd73842b431690dcd0e7eec6bd1d57161/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dea01e1082056852007695e4b5f98bd73842b431690dcd0e7eec6bd1d57161/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dea01e1082056852007695e4b5f98bd73842b431690dcd0e7eec6bd1d57161/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:14 compute-0 ceph-mon[74194]: pgmap v1041: 337 pgs: 337 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Sep 30 14:50:14 compute-0 podman[283218]: 2025-09-30 14:50:14.026273883 +0000 UTC m=+0.496737868 container init 2d27340269254bc027f309893661c39d8db31512205e980dba199c30eb71ba43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_liskov, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:50:14 compute-0 podman[283218]: 2025-09-30 14:50:14.033985886 +0000 UTC m=+0.504449841 container start 2d27340269254bc027f309893661c39d8db31512205e980dba199c30eb71ba43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:50:14 compute-0 podman[283218]: 2025-09-30 14:50:14.08326249 +0000 UTC m=+0.553726485 container attach 2d27340269254bc027f309893661c39d8db31512205e980dba199c30eb71ba43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_liskov, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:50:14 compute-0 zealous_liskov[283235]: {
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:     "0": [
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:         {
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "devices": [
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "/dev/loop3"
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             ],
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "lv_name": "ceph_lv0",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "lv_size": "21470642176",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "name": "ceph_lv0",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "tags": {
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.cluster_name": "ceph",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.crush_device_class": "",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.encrypted": "0",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.osd_id": "0",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.type": "block",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.vdo": "0",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:                 "ceph.with_tpm": "0"
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             },
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "type": "block",
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:             "vg_name": "ceph_vg0"
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:         }
Sep 30 14:50:14 compute-0 zealous_liskov[283235]:     ]
Sep 30 14:50:14 compute-0 zealous_liskov[283235]: }
Sep 30 14:50:14 compute-0 systemd[1]: libpod-2d27340269254bc027f309893661c39d8db31512205e980dba199c30eb71ba43.scope: Deactivated successfully.
Sep 30 14:50:14 compute-0 podman[283218]: 2025-09-30 14:50:14.344744179 +0000 UTC m=+0.815208174 container died 2d27340269254bc027f309893661c39d8db31512205e980dba199c30eb71ba43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_liskov, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:50:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:14.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8dea01e1082056852007695e4b5f98bd73842b431690dcd0e7eec6bd1d57161-merged.mount: Deactivated successfully.
Sep 30 14:50:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1042: 337 pgs: 337 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 313 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Sep 30 14:50:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Sep 30 14:50:14 compute-0 podman[283218]: 2025-09-30 14:50:14.685372236 +0000 UTC m=+1.155836201 container remove 2d27340269254bc027f309893661c39d8db31512205e980dba199c30eb71ba43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_liskov, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:50:14 compute-0 systemd[1]: libpod-conmon-2d27340269254bc027f309893661c39d8db31512205e980dba199c30eb71ba43.scope: Deactivated successfully.
Sep 30 14:50:14 compute-0 sudo[283110]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:14] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Sep 30 14:50:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:14] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Sep 30 14:50:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:50:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:50:14 compute-0 sudo[283258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:50:14 compute-0 sudo[283258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:14 compute-0 sudo[283258]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:14 compute-0 sudo[283283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:50:14 compute-0 sudo[283283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:14 compute-0 nova_compute[261524]: 2025-09-30 14:50:14.954 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:50:15 compute-0 podman[283347]: 2025-09-30 14:50:15.444903447 +0000 UTC m=+0.106039996 container create a00ee13760067a80df5135ae7b307f4281eef6f30fa152df404d36a161af07f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_banzai, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:50:15 compute-0 podman[283347]: 2025-09-30 14:50:15.376643134 +0000 UTC m=+0.037779713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:50:15 compute-0 systemd[1]: Started libpod-conmon-a00ee13760067a80df5135ae7b307f4281eef6f30fa152df404d36a161af07f4.scope.
Sep 30 14:50:15 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:50:15 compute-0 podman[283347]: 2025-09-30 14:50:15.607813006 +0000 UTC m=+0.268949595 container init a00ee13760067a80df5135ae7b307f4281eef6f30fa152df404d36a161af07f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:50:15 compute-0 podman[283347]: 2025-09-30 14:50:15.619545634 +0000 UTC m=+0.280682183 container start a00ee13760067a80df5135ae7b307f4281eef6f30fa152df404d36a161af07f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:50:15 compute-0 sharp_banzai[283365]: 167 167
Sep 30 14:50:15 compute-0 systemd[1]: libpod-a00ee13760067a80df5135ae7b307f4281eef6f30fa152df404d36a161af07f4.scope: Deactivated successfully.
Sep 30 14:50:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:15.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:15 compute-0 nova_compute[261524]: 2025-09-30 14:50:15.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:15 compute-0 podman[283347]: 2025-09-30 14:50:15.713692517 +0000 UTC m=+0.374829076 container attach a00ee13760067a80df5135ae7b307f4281eef6f30fa152df404d36a161af07f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_banzai, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:50:15 compute-0 podman[283347]: 2025-09-30 14:50:15.71609427 +0000 UTC m=+0.377230779 container died a00ee13760067a80df5135ae7b307f4281eef6f30fa152df404d36a161af07f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:50:15 compute-0 ceph-mon[74194]: pgmap v1042: 337 pgs: 337 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 313 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Sep 30 14:50:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:50:15 compute-0 nova_compute[261524]: 2025-09-30 14:50:15.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:50:15 compute-0 nova_compute[261524]: 2025-09-30 14:50:15.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:50:15 compute-0 nova_compute[261524]: 2025-09-30 14:50:15.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:50:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-86be0e470d848e30777030ed09f36cd0389cc0fe553dc038ba86f1cfe4107bd0-merged.mount: Deactivated successfully.
Sep 30 14:50:15 compute-0 nova_compute[261524]: 2025-09-30 14:50:15.969 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:50:15 compute-0 podman[283347]: 2025-09-30 14:50:15.980465475 +0000 UTC m=+0.641601984 container remove a00ee13760067a80df5135ae7b307f4281eef6f30fa152df404d36a161af07f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_banzai, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:50:16 compute-0 systemd[1]: libpod-conmon-a00ee13760067a80df5135ae7b307f4281eef6f30fa152df404d36a161af07f4.scope: Deactivated successfully.
Sep 30 14:50:16 compute-0 podman[283390]: 2025-09-30 14:50:16.194699223 +0000 UTC m=+0.045948608 container create 3c0f14b4b6d1558386c73e78af6c4af6f70f278720c8a80ad5139efc8df81afc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hellman, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 14:50:16 compute-0 systemd[1]: Started libpod-conmon-3c0f14b4b6d1558386c73e78af6c4af6f70f278720c8a80ad5139efc8df81afc.scope.
Sep 30 14:50:16 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bcf34ab051256e93b9ecc103c720ee570ced54e1b404cde8ff9e0752fc2831f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bcf34ab051256e93b9ecc103c720ee570ced54e1b404cde8ff9e0752fc2831f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bcf34ab051256e93b9ecc103c720ee570ced54e1b404cde8ff9e0752fc2831f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bcf34ab051256e93b9ecc103c720ee570ced54e1b404cde8ff9e0752fc2831f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:50:16 compute-0 podman[283390]: 2025-09-30 14:50:16.177013448 +0000 UTC m=+0.028262883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:50:16 compute-0 podman[283390]: 2025-09-30 14:50:16.278625477 +0000 UTC m=+0.129874882 container init 3c0f14b4b6d1558386c73e78af6c4af6f70f278720c8a80ad5139efc8df81afc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hellman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:50:16 compute-0 podman[283390]: 2025-09-30 14:50:16.285314863 +0000 UTC m=+0.136564258 container start 3c0f14b4b6d1558386c73e78af6c4af6f70f278720c8a80ad5139efc8df81afc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 14:50:16 compute-0 podman[283390]: 2025-09-30 14:50:16.289151084 +0000 UTC m=+0.140400509 container attach 3c0f14b4b6d1558386c73e78af6c4af6f70f278720c8a80ad5139efc8df81afc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:50:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:16.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1043: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Sep 30 14:50:16 compute-0 nova_compute[261524]: 2025-09-30 14:50:16.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:16 compute-0 nova_compute[261524]: 2025-09-30 14:50:16.951 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:50:16 compute-0 nova_compute[261524]: 2025-09-30 14:50:16.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:50:16 compute-0 lvm[283482]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:50:16 compute-0 lvm[283482]: VG ceph_vg0 finished
Sep 30 14:50:17 compute-0 happy_hellman[283408]: {}
Sep 30 14:50:17 compute-0 systemd[1]: libpod-3c0f14b4b6d1558386c73e78af6c4af6f70f278720c8a80ad5139efc8df81afc.scope: Deactivated successfully.
Sep 30 14:50:17 compute-0 podman[283390]: 2025-09-30 14:50:17.043888109 +0000 UTC m=+0.895137504 container died 3c0f14b4b6d1558386c73e78af6c4af6f70f278720c8a80ad5139efc8df81afc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hellman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:50:17 compute-0 systemd[1]: libpod-3c0f14b4b6d1558386c73e78af6c4af6f70f278720c8a80ad5139efc8df81afc.scope: Consumed 1.169s CPU time.
Sep 30 14:50:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bcf34ab051256e93b9ecc103c720ee570ced54e1b404cde8ff9e0752fc2831f-merged.mount: Deactivated successfully.
Sep 30 14:50:17 compute-0 podman[283390]: 2025-09-30 14:50:17.090288947 +0000 UTC m=+0.941538342 container remove 3c0f14b4b6d1558386c73e78af6c4af6f70f278720c8a80ad5139efc8df81afc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hellman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:50:17 compute-0 systemd[1]: libpod-conmon-3c0f14b4b6d1558386c73e78af6c4af6f70f278720c8a80ad5139efc8df81afc.scope: Deactivated successfully.
Sep 30 14:50:17 compute-0 sudo[283283]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:50:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:50:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:17.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:50:17 compute-0 sudo[283497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:50:17 compute-0 sudo[283497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:17 compute-0 sudo[283497]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:17.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:17 compute-0 nova_compute[261524]: 2025-09-30 14:50:17.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:50:18 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:50:18.229 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:50:18 compute-0 nova_compute[261524]: 2025-09-30 14:50:18.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:18 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:50:18.230 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:50:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:18.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:18 compute-0 ceph-mon[74194]: pgmap v1043: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Sep 30 14:50:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:50:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1044: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Sep 30 14:50:18 compute-0 nova_compute[261524]: 2025-09-30 14:50:18.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:50:18 compute-0 nova_compute[261524]: 2025-09-30 14:50:18.954 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:50:18 compute-0 nova_compute[261524]: 2025-09-30 14:50:18.954 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:50:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.014 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.015 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.015 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.015 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.016 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.495 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:50:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3027085864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:50:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3274582928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:50:19 compute-0 ceph-mon[74194]: pgmap v1044: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Sep 30 14:50:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/609594693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:50:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4121724814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:50:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3263994149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:50:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:19.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.685 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.686 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4518MB free_disk=59.89732360839844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.686 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.687 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.738 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.738 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.753 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing inventories for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.879 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating ProviderTree inventory for provider 06783cfc-6d32-454d-9501-ebd8adea3735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.880 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.899 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing aggregate associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.916 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing trait associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Sep 30 14:50:19 compute-0 nova_compute[261524]: 2025-09-30 14:50:19.984 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:50:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:20.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:50:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424910042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:50:20 compute-0 nova_compute[261524]: 2025-09-30 14:50:20.481 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:50:20 compute-0 nova_compute[261524]: 2025-09-30 14:50:20.489 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:50:20 compute-0 nova_compute[261524]: 2025-09-30 14:50:20.505 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:50:20 compute-0 nova_compute[261524]: 2025-09-30 14:50:20.508 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:50:20 compute-0 nova_compute[261524]: 2025-09-30 14:50:20.508 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:50:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2424910042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:50:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1045: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Sep 30 14:50:20 compute-0 nova_compute[261524]: 2025-09-30 14:50:20.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:21 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:50:21.232 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:50:21 compute-0 ceph-mon[74194]: pgmap v1045: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Sep 30 14:50:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:21.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:21 compute-0 nova_compute[261524]: 2025-09-30 14:50:21.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:22.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:22 compute-0 nova_compute[261524]: 2025-09-30 14:50:22.506 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:50:22 compute-0 nova_compute[261524]: 2025-09-30 14:50:22.507 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:50:22 compute-0 nova_compute[261524]: 2025-09-30 14:50:22.507 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:50:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1046: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:50:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:23.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:23.694Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:50:23 compute-0 ceph-mon[74194]: pgmap v1046: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:50:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:24.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1047: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 13 KiB/s wr, 4 op/s
Sep 30 14:50:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:24] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Sep 30 14:50:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:24] "GET /metrics HTTP/1.1" 200 48546 "" "Prometheus/2.51.0"
Sep 30 14:50:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:25.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:25 compute-0 nova_compute[261524]: 2025-09-30 14:50:25.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:25 compute-0 ceph-mon[74194]: pgmap v1047: 337 pgs: 337 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 13 KiB/s wr, 4 op/s
Sep 30 14:50:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:26.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:26 compute-0 sudo[283576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:50:26 compute-0 sudo[283576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:26 compute-0 sudo[283576]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1048: 337 pgs: 337 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 21 KiB/s wr, 33 op/s
Sep 30 14:50:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/728801815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:50:26 compute-0 nova_compute[261524]: 2025-09-30 14:50:26.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:27.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:50:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:27.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:27 compute-0 ceph-mon[74194]: pgmap v1048: 337 pgs: 337 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 21 KiB/s wr, 33 op/s
Sep 30 14:50:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:28.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1049: 337 pgs: 337 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Sep 30 14:50:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:50:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:50:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:29.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:50:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:50:29 compute-0 ceph-mon[74194]: pgmap v1049: 337 pgs: 337 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Sep 30 14:50:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:50:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:50:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:50:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:50:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:50:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:30.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1050: 337 pgs: 337 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Sep 30 14:50:30 compute-0 nova_compute[261524]: 2025-09-30 14:50:30.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1203958035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:50:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:31.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:31 compute-0 ceph-mon[74194]: pgmap v1050: 337 pgs: 337 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Sep 30 14:50:31 compute-0 nova_compute[261524]: 2025-09-30 14:50:31.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:32.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1051: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 21 KiB/s wr, 58 op/s
Sep 30 14:50:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:33.695Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:50:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:33.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:33 compute-0 ceph-mon[74194]: pgmap v1051: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 21 KiB/s wr, 58 op/s
Sep 30 14:50:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:34.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Sep 30 14:50:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:34] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Sep 30 14:50:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:34] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Sep 30 14:50:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:35.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:35 compute-0 nova_compute[261524]: 2025-09-30 14:50:35.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:35 compute-0 ceph-mon[74194]: pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Sep 30 14:50:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:36.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Sep 30 14:50:36 compute-0 nova_compute[261524]: 2025-09-30 14:50:36.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:37 compute-0 podman[283611]: 2025-09-30 14:50:37.156621807 +0000 UTC m=+0.074738645 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Sep 30 14:50:37 compute-0 podman[283613]: 2025-09-30 14:50:37.156608746 +0000 UTC m=+0.071193731 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Sep 30 14:50:37 compute-0 podman[283614]: 2025-09-30 14:50:37.170865921 +0000 UTC m=+0.083053863 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:50:37 compute-0 podman[283612]: 2025-09-30 14:50:37.187049986 +0000 UTC m=+0.105079291 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 14:50:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:37.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:50:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:37.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:37 compute-0 ceph-mon[74194]: pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Sep 30 14:50:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:50:38.267 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:50:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:50:38.268 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:50:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:50:38.268 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:50:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:38.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:50:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:50:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:39.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:50:39 compute-0 ceph-mon[74194]: pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:50:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:40.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:50:40 compute-0 nova_compute[261524]: 2025-09-30 14:50:40.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:41.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:41 compute-0 ceph-mon[74194]: pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:50:41 compute-0 nova_compute[261524]: 2025-09-30 14:50:41.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:42.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:50:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:43.697Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:50:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:43.697Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:50:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:43.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:43 compute-0 ceph-mon[74194]: pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:50:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:44.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:50:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:50:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:50:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:44] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Sep 30 14:50:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:44] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Sep 30 14:50:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:50:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:45.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:45 compute-0 nova_compute[261524]: 2025-09-30 14:50:45.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:45 compute-0 ceph-mon[74194]: pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:50:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:46.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:46 compute-0 sudo[283705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:50:46 compute-0 sudo[283705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:50:46 compute-0 sudo[283705]: pam_unix(sudo:session): session closed for user root
Sep 30 14:50:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:50:46 compute-0 nova_compute[261524]: 2025-09-30 14:50:46.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:47.195Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:50:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:47.196Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:50:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:47.196Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:50:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:47.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:47 compute-0 ceph-mon[74194]: pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:50:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:48.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:50:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:49.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:49 compute-0 ceph-mon[74194]: pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:50:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:50.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:50:50 compute-0 nova_compute[261524]: 2025-09-30 14:50:50.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:50 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/941742703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:50:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:51.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:51 compute-0 nova_compute[261524]: 2025-09-30 14:50:51.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:52 compute-0 ceph-mon[74194]: pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:50:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:52.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1061: 337 pgs: 337 active+clean; 88 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Sep 30 14:50:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:53.698Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:50:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:53.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:54 compute-0 ceph-mon[74194]: pgmap v1061: 337 pgs: 337 active+clean; 88 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Sep 30 14:50:54 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4069037268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:50:54 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2896564958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Sep 30 14:50:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:54.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1062: 337 pgs: 337 active+clean; 88 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Sep 30 14:50:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:54] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Sep 30 14:50:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:50:54] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Sep 30 14:50:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:55.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:55 compute-0 nova_compute[261524]: 2025-09-30 14:50:55.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:56 compute-0 ceph-mon[74194]: pgmap v1062: 337 pgs: 337 active+clean; 88 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Sep 30 14:50:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:56.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1063: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 14:50:56 compute-0 nova_compute[261524]: 2025-09-30 14:50:56.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:50:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:50:57.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:50:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:50:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:57.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:58 compute-0 ceph-mon[74194]: pgmap v1063: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 14:50:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:50:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:50:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:50:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1064: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 14:50:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:50:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:50:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:50:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:50:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:50:59
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.log', '.mgr', 'volumes', 'cephfs.cephfs.meta', '.nfs', 'images', 'backups', 'vms', 'default.rgw.meta']
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:50:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:50:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:50:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:50:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:50:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:50:59.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:50:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:51:00 compute-0 ceph-mon[74194]: pgmap v1064: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 14:51:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:51:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1065: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 14:51:00 compute-0 nova_compute[261524]: 2025-09-30 14:51:00.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:51:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:51:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:51:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:51:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:51:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:51:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:51:01 compute-0 ceph-mon[74194]: pgmap v1065: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Sep 30 14:51:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:01.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:01 compute-0 nova_compute[261524]: 2025-09-30 14:51:01.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:02.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1066: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:51:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:03.699Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:03.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:03 compute-0 ceph-mon[74194]: pgmap v1066: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Sep 30 14:51:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:04.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1067: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Sep 30 14:51:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:04] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Sep 30 14:51:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:04] "GET /metrics HTTP/1.1" 200 48540 "" "Prometheus/2.51.0"
Sep 30 14:51:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:05.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:05 compute-0 nova_compute[261524]: 2025-09-30 14:51:05.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:05 compute-0 ceph-mon[74194]: pgmap v1067: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Sep 30 14:51:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:06.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:06 compute-0 sudo[283750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:51:06 compute-0 sudo[283750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:06 compute-0 sudo[283750]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1068: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Sep 30 14:51:06 compute-0 nova_compute[261524]: 2025-09-30 14:51:06.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:07.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:07.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:08 compute-0 ceph-mon[74194]: pgmap v1068: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Sep 30 14:51:08 compute-0 podman[283785]: 2025-09-30 14:51:08.130429711 +0000 UTC m=+0.050877437 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Sep 30 14:51:08 compute-0 podman[283779]: 2025-09-30 14:51:08.153121687 +0000 UTC m=+0.077805015 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:51:08 compute-0 podman[283777]: 2025-09-30 14:51:08.160991224 +0000 UTC m=+0.094695229 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:51:08 compute-0 podman[283778]: 2025-09-30 14:51:08.172057185 +0000 UTC m=+0.096721492 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Sep 30 14:51:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:08.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1069: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Sep 30 14:51:08 compute-0 ceph-mgr[74485]: [dashboard INFO request] [192.168.122.100:56770] [POST] [200] [0.003s] [4.0B] [804c25c5-c60c-402d-9616-b5f54617578c] /api/prometheus_receiver
Sep 30 14:51:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:09.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:10 compute-0 ceph-mon[74194]: pgmap v1069: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Sep 30 14:51:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:10.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1070: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Sep 30 14:51:10 compute-0 nova_compute[261524]: 2025-09-30 14:51:10.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3405450663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:51:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3405450663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:51:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:11.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:11 compute-0 nova_compute[261524]: 2025-09-30 14:51:11.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:12 compute-0 ceph-mon[74194]: pgmap v1070: 337 pgs: 337 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Sep 30 14:51:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:51:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:12.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:51:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1071: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Sep 30 14:51:13 compute-0 ceph-mon[74194]: pgmap v1071: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Sep 30 14:51:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:13.701Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:13.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:14.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:51:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:51:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1072: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:51:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:14] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:51:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:14] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:51:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:51:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:15.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:15 compute-0 nova_compute[261524]: 2025-09-30 14:51:15.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:15 compute-0 ceph-mon[74194]: pgmap v1072: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Sep 30 14:51:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:16.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1073: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:51:16 compute-0 nova_compute[261524]: 2025-09-30 14:51:16.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:16 compute-0 nova_compute[261524]: 2025-09-30 14:51:16.947 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:51:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:17.199Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:51:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:17.199Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:51:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:17.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:51:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:17 compute-0 sudo[283870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:51:17 compute-0 sudo[283870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:17 compute-0 sudo[283870]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:17 compute-0 sudo[283895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Sep 30 14:51:17 compute-0 sudo[283895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:17.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:17 compute-0 ceph-mon[74194]: pgmap v1073: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Sep 30 14:51:17 compute-0 nova_compute[261524]: 2025-09-30 14:51:17.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:51:17 compute-0 nova_compute[261524]: 2025-09-30 14:51:17.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:51:17 compute-0 nova_compute[261524]: 2025-09-30 14:51:17.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:51:17 compute-0 nova_compute[261524]: 2025-09-30 14:51:17.970 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:51:17 compute-0 nova_compute[261524]: 2025-09-30 14:51:17.970 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:51:18 compute-0 podman[283993]: 2025-09-30 14:51:18.192248507 +0000 UTC m=+0.073709037 container exec a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:51:18 compute-0 podman[283993]: 2025-09-30 14:51:18.312549967 +0000 UTC m=+0.194010407 container exec_died a277d7b6b6f3cf10a7ce0ade5eebf0f8127074c248f9bce4451399614b97ded5 (image=quay.io/ceph/ceph:v19, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:51:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:18.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1074: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 14:51:18 compute-0 podman[284113]: 2025-09-30 14:51:18.74743803 +0000 UTC m=+0.057825470 container exec 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:51:18 compute-0 podman[284113]: 2025-09-30 14:51:18.785560421 +0000 UTC m=+0.095947841 container exec_died 7517aa84b8564a81255eab7821e47762fe9b9d86aae2c7d77e10c0dfa057ab6d (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:51:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:18.845Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:51:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:18.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:51:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:19 compute-0 nova_compute[261524]: 2025-09-30 14:51:19.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:19 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:51:19.114 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:51:19 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:51:19.116 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:51:19 compute-0 podman[284199]: 2025-09-30 14:51:19.160264674 +0000 UTC m=+0.053534547 container exec d88f0cc72f487145fcaf99f1acc03b1aff53e72e3f3f9612ed1bd244b07dfd6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:51:19 compute-0 podman[284199]: 2025-09-30 14:51:19.167291499 +0000 UTC m=+0.060561332 container exec_died d88f0cc72f487145fcaf99f1acc03b1aff53e72e3f3f9612ed1bd244b07dfd6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:51:19 compute-0 podman[284263]: 2025-09-30 14:51:19.442257541 +0000 UTC m=+0.072278079 container exec ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:51:19 compute-0 podman[284263]: 2025-09-30 14:51:19.4585655 +0000 UTC m=+0.088585948 container exec_died ec49c6e24c4fbc830188fe80824f1adb9a8c3cd6d4f4491a3e9330b04061bea8 (image=quay.io/ceph/haproxy:2.3, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-haproxy-nfs-cephfs-compute-0-yvkpei)
Sep 30 14:51:19 compute-0 podman[284328]: 2025-09-30 14:51:19.712836289 +0000 UTC m=+0.055266553 container exec df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, release=1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-type=git, distribution-scope=public, io.buildah.version=1.28.2, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Sep 30 14:51:19 compute-0 podman[284328]: 2025-09-30 14:51:19.72849613 +0000 UTC m=+0.070926354 container exec_died df25873f420822291a2a2f3e4272e6ab946447daa59ec12441fae67f848da096 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-keepalived-nfs-cephfs-compute-0-nfjjcv, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, description=keepalived for Ceph, name=keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, release=1793, io.openshift.tags=Ceph keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Sep 30 14:51:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:19.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:19 compute-0 podman[284393]: 2025-09-30 14:51:19.948865219 +0000 UTC m=+0.052494590 container exec b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:51:19 compute-0 nova_compute[261524]: 2025-09-30 14:51:19.951 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:51:19 compute-0 nova_compute[261524]: 2025-09-30 14:51:19.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:51:19 compute-0 nova_compute[261524]: 2025-09-30 14:51:19.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:51:19 compute-0 podman[284393]: 2025-09-30 14:51:19.97251772 +0000 UTC m=+0.076147061 container exec_died b02a1f46575144d1c0fa40fb1da73aeaa83cbe57512ae5912168f030bf7101d3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:51:19 compute-0 nova_compute[261524]: 2025-09-30 14:51:19.977 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:51:19 compute-0 nova_compute[261524]: 2025-09-30 14:51:19.978 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:51:19 compute-0 nova_compute[261524]: 2025-09-30 14:51:19.978 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:51:19 compute-0 nova_compute[261524]: 2025-09-30 14:51:19.978 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:51:19 compute-0 nova_compute[261524]: 2025-09-30 14:51:19.978 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:51:20 compute-0 ceph-mon[74194]: pgmap v1074: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 14:51:20 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:51:20.117 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:51:20 compute-0 podman[284489]: 2025-09-30 14:51:20.18837956 +0000 UTC m=+0.057611524 container exec 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:51:20 compute-0 podman[284489]: 2025-09-30 14:51:20.364064255 +0000 UTC m=+0.233296219 container exec_died 4fd9639868c9fdb652f2d65dd14f46e8bfbcca13240732508ba689971c876ee0 (image=quay.io/ceph/grafana:10.4.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Sep 30 14:51:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:51:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854788734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:51:20 compute-0 nova_compute[261524]: 2025-09-30 14:51:20.442 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:51:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:20.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:20 compute-0 nova_compute[261524]: 2025-09-30 14:51:20.631 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:51:20 compute-0 nova_compute[261524]: 2025-09-30 14:51:20.632 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4496MB free_disk=59.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:51:20 compute-0 nova_compute[261524]: 2025-09-30 14:51:20.633 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:51:20 compute-0 nova_compute[261524]: 2025-09-30 14:51:20.633 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:51:20 compute-0 nova_compute[261524]: 2025-09-30 14:51:20.697 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:51:20 compute-0 nova_compute[261524]: 2025-09-30 14:51:20.698 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:51:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1075: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 14:51:20 compute-0 nova_compute[261524]: 2025-09-30 14:51:20.731 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:51:20 compute-0 nova_compute[261524]: 2025-09-30 14:51:20.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:20 compute-0 podman[284605]: 2025-09-30 14:51:20.91931695 +0000 UTC m=+0.198767502 container exec e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:51:21 compute-0 podman[284655]: 2025-09-30 14:51:21.033429057 +0000 UTC m=+0.058730204 container exec_died e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:51:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:51:21 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747246370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:51:21 compute-0 podman[284605]: 2025-09-30 14:51:21.164610303 +0000 UTC m=+0.444060875 container exec_died e4a50bbeb60f228cd09239a211f5e468f7ca87363229c6999e3900e12da32b57 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Sep 30 14:51:21 compute-0 nova_compute[261524]: 2025-09-30 14:51:21.166 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:51:21 compute-0 nova_compute[261524]: 2025-09-30 14:51:21.172 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:51:21 compute-0 nova_compute[261524]: 2025-09-30 14:51:21.186 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:51:21 compute-0 nova_compute[261524]: 2025-09-30 14:51:21.188 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:51:21 compute-0 nova_compute[261524]: 2025-09-30 14:51:21.189 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:51:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/632045599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:51:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/706890639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:51:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3854788734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:51:21 compute-0 sudo[283895]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:51:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:21.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:51:21 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:21 compute-0 sudo[284673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:51:21 compute-0 sudo[284673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:21 compute-0 sudo[284673]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:21 compute-0 nova_compute[261524]: 2025-09-30 14:51:21.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:22 compute-0 sudo[284698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:51:22 compute-0 sudo[284698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:22 compute-0 nova_compute[261524]: 2025-09-30 14:51:22.189 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:51:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:22.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:22 compute-0 sudo[284698]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:51:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:51:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:51:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:51:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:51:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1076: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.2 MiB/s wr, 72 op/s
Sep 30 14:51:22 compute-0 ceph-mon[74194]: pgmap v1075: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Sep 30 14:51:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2747246370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:51:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/368552818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:51:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1749165851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:51:22 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:22 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:51:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:51:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:51:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:51:22 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:51:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:51:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:51:22 compute-0 sudo[284756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:51:22 compute-0 sudo[284756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:22 compute-0 sudo[284756]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:22 compute-0 sudo[284781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:51:22 compute-0 sudo[284781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:22 compute-0 nova_compute[261524]: 2025-09-30 14:51:22.951 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:51:22 compute-0 nova_compute[261524]: 2025-09-30 14:51:22.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:51:22 compute-0 nova_compute[261524]: 2025-09-30 14:51:22.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:51:23 compute-0 unix_chkpwd[284846]: password check failed for user (root)
Sep 30 14:51:23 compute-0 sshd-session[284740]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:51:23 compute-0 podman[284848]: 2025-09-30 14:51:23.239310349 +0000 UTC m=+0.043707629 container create 075b8c381eff31a8ea812c05cfba0dad2590b117cbb2770a55caab36321d3b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_meninsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:51:23 compute-0 systemd[1]: Started libpod-conmon-075b8c381eff31a8ea812c05cfba0dad2590b117cbb2770a55caab36321d3b9d.scope.
Sep 30 14:51:23 compute-0 podman[284848]: 2025-09-30 14:51:23.218185764 +0000 UTC m=+0.022583044 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:51:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:51:23 compute-0 podman[284848]: 2025-09-30 14:51:23.350971432 +0000 UTC m=+0.155368692 container init 075b8c381eff31a8ea812c05cfba0dad2590b117cbb2770a55caab36321d3b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:51:23 compute-0 podman[284848]: 2025-09-30 14:51:23.362943116 +0000 UTC m=+0.167340366 container start 075b8c381eff31a8ea812c05cfba0dad2590b117cbb2770a55caab36321d3b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_meninsky, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:51:23 compute-0 podman[284848]: 2025-09-30 14:51:23.366095049 +0000 UTC m=+0.170492339 container attach 075b8c381eff31a8ea812c05cfba0dad2590b117cbb2770a55caab36321d3b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:51:23 compute-0 nifty_meninsky[284864]: 167 167
Sep 30 14:51:23 compute-0 systemd[1]: libpod-075b8c381eff31a8ea812c05cfba0dad2590b117cbb2770a55caab36321d3b9d.scope: Deactivated successfully.
Sep 30 14:51:23 compute-0 podman[284848]: 2025-09-30 14:51:23.371690106 +0000 UTC m=+0.176087406 container died 075b8c381eff31a8ea812c05cfba0dad2590b117cbb2770a55caab36321d3b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_meninsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:51:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1507e7078c981b24f08f5c08ad54bfcf21453c58636faf7c10503b6c1fbee613-merged.mount: Deactivated successfully.
Sep 30 14:51:23 compute-0 podman[284848]: 2025-09-30 14:51:23.418419763 +0000 UTC m=+0.222817043 container remove 075b8c381eff31a8ea812c05cfba0dad2590b117cbb2770a55caab36321d3b9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:51:23 compute-0 systemd[1]: libpod-conmon-075b8c381eff31a8ea812c05cfba0dad2590b117cbb2770a55caab36321d3b9d.scope: Deactivated successfully.
Sep 30 14:51:23 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:51:23 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:51:23 compute-0 ceph-mon[74194]: pgmap v1076: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.2 MiB/s wr, 72 op/s
Sep 30 14:51:23 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:23 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:23 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:51:23 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:51:23 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:51:23 compute-0 podman[284888]: 2025-09-30 14:51:23.620859471 +0000 UTC m=+0.039422227 container create e2fb52be496e21106277c3187f9d310799df711e4e23945e3d10ac3bed269cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kilby, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 14:51:23 compute-0 systemd[1]: Started libpod-conmon-e2fb52be496e21106277c3187f9d310799df711e4e23945e3d10ac3bed269cb4.scope.
Sep 30 14:51:23 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/459583813d1ae7c56b7e08afe21dca0d056f545169094d831c23e8655414ea86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/459583813d1ae7c56b7e08afe21dca0d056f545169094d831c23e8655414ea86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/459583813d1ae7c56b7e08afe21dca0d056f545169094d831c23e8655414ea86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/459583813d1ae7c56b7e08afe21dca0d056f545169094d831c23e8655414ea86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/459583813d1ae7c56b7e08afe21dca0d056f545169094d831c23e8655414ea86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:23 compute-0 podman[284888]: 2025-09-30 14:51:23.697046512 +0000 UTC m=+0.115609308 container init e2fb52be496e21106277c3187f9d310799df711e4e23945e3d10ac3bed269cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kilby, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:51:23 compute-0 podman[284888]: 2025-09-30 14:51:23.604598704 +0000 UTC m=+0.023161500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:51:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:23.703Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:51:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:23.703Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:51:23 compute-0 podman[284888]: 2025-09-30 14:51:23.709762306 +0000 UTC m=+0.128325082 container start e2fb52be496e21106277c3187f9d310799df711e4e23945e3d10ac3bed269cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kilby, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:51:23 compute-0 podman[284888]: 2025-09-30 14:51:23.713179886 +0000 UTC m=+0.131742652 container attach e2fb52be496e21106277c3187f9d310799df711e4e23945e3d10ac3bed269cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:51:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:23.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:24 compute-0 priceless_kilby[284904]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:51:24 compute-0 priceless_kilby[284904]: --> All data devices are unavailable
Sep 30 14:51:24 compute-0 systemd[1]: libpod-e2fb52be496e21106277c3187f9d310799df711e4e23945e3d10ac3bed269cb4.scope: Deactivated successfully.
Sep 30 14:51:24 compute-0 podman[284888]: 2025-09-30 14:51:24.077808724 +0000 UTC m=+0.496371480 container died e2fb52be496e21106277c3187f9d310799df711e4e23945e3d10ac3bed269cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kilby, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Sep 30 14:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-459583813d1ae7c56b7e08afe21dca0d056f545169094d831c23e8655414ea86-merged.mount: Deactivated successfully.
Sep 30 14:51:24 compute-0 podman[284888]: 2025-09-30 14:51:24.122319323 +0000 UTC m=+0.540882089 container remove e2fb52be496e21106277c3187f9d310799df711e4e23945e3d10ac3bed269cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:51:24 compute-0 systemd[1]: libpod-conmon-e2fb52be496e21106277c3187f9d310799df711e4e23945e3d10ac3bed269cb4.scope: Deactivated successfully.
Sep 30 14:51:24 compute-0 sudo[284781]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:24 compute-0 sudo[284933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:51:24 compute-0 sudo[284933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:24 compute-0 sudo[284933]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:24 compute-0 sudo[284958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:51:24 compute-0 sudo[284958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:24.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1077: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 13 KiB/s wr, 8 op/s
Sep 30 14:51:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1470071829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:51:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:24] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:51:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:24] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Sep 30 14:51:24 compute-0 podman[285022]: 2025-09-30 14:51:24.765420516 +0000 UTC m=+0.056438254 container create a515b342f78d808894d6def4528bee94c4943ab1e76f5aa5c43c7ae74d1bc8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mestorf, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:51:24 compute-0 sshd-session[284740]: Failed password for root from 80.94.93.233 port 38940 ssh2
Sep 30 14:51:24 compute-0 systemd[1]: Started libpod-conmon-a515b342f78d808894d6def4528bee94c4943ab1e76f5aa5c43c7ae74d1bc8c6.scope.
Sep 30 14:51:24 compute-0 podman[285022]: 2025-09-30 14:51:24.739570907 +0000 UTC m=+0.030588705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:51:24 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:51:24 compute-0 podman[285022]: 2025-09-30 14:51:24.871956174 +0000 UTC m=+0.162973962 container init a515b342f78d808894d6def4528bee94c4943ab1e76f5aa5c43c7ae74d1bc8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:51:24 compute-0 podman[285022]: 2025-09-30 14:51:24.879472581 +0000 UTC m=+0.170490319 container start a515b342f78d808894d6def4528bee94c4943ab1e76f5aa5c43c7ae74d1bc8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:51:24 compute-0 podman[285022]: 2025-09-30 14:51:24.883399555 +0000 UTC m=+0.174417353 container attach a515b342f78d808894d6def4528bee94c4943ab1e76f5aa5c43c7ae74d1bc8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mestorf, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:51:24 compute-0 vibrant_mestorf[285038]: 167 167
Sep 30 14:51:24 compute-0 systemd[1]: libpod-a515b342f78d808894d6def4528bee94c4943ab1e76f5aa5c43c7ae74d1bc8c6.scope: Deactivated successfully.
Sep 30 14:51:24 compute-0 podman[285022]: 2025-09-30 14:51:24.888741865 +0000 UTC m=+0.179759593 container died a515b342f78d808894d6def4528bee94c4943ab1e76f5aa5c43c7ae74d1bc8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mestorf, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3fb64cde4f46c8551eda674210dc0c5a47408ad7706bb3617807fa9f9dfdb12-merged.mount: Deactivated successfully.
Sep 30 14:51:24 compute-0 podman[285022]: 2025-09-30 14:51:24.942329993 +0000 UTC m=+0.233347701 container remove a515b342f78d808894d6def4528bee94c4943ab1e76f5aa5c43c7ae74d1bc8c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:51:24 compute-0 systemd[1]: libpod-conmon-a515b342f78d808894d6def4528bee94c4943ab1e76f5aa5c43c7ae74d1bc8c6.scope: Deactivated successfully.
Sep 30 14:51:25 compute-0 podman[285063]: 2025-09-30 14:51:25.141009551 +0000 UTC m=+0.036647653 container create 1b7d19a9db9854946eafce42b9e1b8727709208b79137176c9800d41f694b494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Sep 30 14:51:25 compute-0 systemd[1]: Started libpod-conmon-1b7d19a9db9854946eafce42b9e1b8727709208b79137176c9800d41f694b494.scope.
Sep 30 14:51:25 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ea9aec3779d4b31fa290b6a03b8bff3474694e85c213b10b37bcd4edb86be5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ea9aec3779d4b31fa290b6a03b8bff3474694e85c213b10b37bcd4edb86be5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ea9aec3779d4b31fa290b6a03b8bff3474694e85c213b10b37bcd4edb86be5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:25 compute-0 podman[285063]: 2025-09-30 14:51:25.127530307 +0000 UTC m=+0.023168429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ea9aec3779d4b31fa290b6a03b8bff3474694e85c213b10b37bcd4edb86be5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:25 compute-0 podman[285063]: 2025-09-30 14:51:25.234413934 +0000 UTC m=+0.130052106 container init 1b7d19a9db9854946eafce42b9e1b8727709208b79137176c9800d41f694b494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:51:25 compute-0 unix_chkpwd[285083]: password check failed for user (root)
Sep 30 14:51:25 compute-0 podman[285063]: 2025-09-30 14:51:25.246471741 +0000 UTC m=+0.142109873 container start 1b7d19a9db9854946eafce42b9e1b8727709208b79137176c9800d41f694b494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:51:25 compute-0 podman[285063]: 2025-09-30 14:51:25.250109426 +0000 UTC m=+0.145747618 container attach 1b7d19a9db9854946eafce42b9e1b8727709208b79137176c9800d41f694b494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:51:25 compute-0 condescending_hermann[285079]: {
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:     "0": [
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:         {
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "devices": [
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "/dev/loop3"
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             ],
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "lv_name": "ceph_lv0",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "lv_size": "21470642176",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "name": "ceph_lv0",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "tags": {
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.cluster_name": "ceph",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.crush_device_class": "",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.encrypted": "0",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.osd_id": "0",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.type": "block",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.vdo": "0",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:                 "ceph.with_tpm": "0"
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             },
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "type": "block",
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:             "vg_name": "ceph_vg0"
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:         }
Sep 30 14:51:25 compute-0 condescending_hermann[285079]:     ]
Sep 30 14:51:25 compute-0 condescending_hermann[285079]: }
Sep 30 14:51:25 compute-0 systemd[1]: libpod-1b7d19a9db9854946eafce42b9e1b8727709208b79137176c9800d41f694b494.scope: Deactivated successfully.
Sep 30 14:51:25 compute-0 podman[285063]: 2025-09-30 14:51:25.602278387 +0000 UTC m=+0.497916579 container died 1b7d19a9db9854946eafce42b9e1b8727709208b79137176c9800d41f694b494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Sep 30 14:51:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9ea9aec3779d4b31fa290b6a03b8bff3474694e85c213b10b37bcd4edb86be5-merged.mount: Deactivated successfully.
Sep 30 14:51:25 compute-0 podman[285063]: 2025-09-30 14:51:25.64997985 +0000 UTC m=+0.545617982 container remove 1b7d19a9db9854946eafce42b9e1b8727709208b79137176c9800d41f694b494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Sep 30 14:51:25 compute-0 ceph-mon[74194]: pgmap v1077: 337 pgs: 337 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 13 KiB/s wr, 8 op/s
Sep 30 14:51:25 compute-0 systemd[1]: libpod-conmon-1b7d19a9db9854946eafce42b9e1b8727709208b79137176c9800d41f694b494.scope: Deactivated successfully.
Sep 30 14:51:25 compute-0 sudo[284958]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:25 compute-0 nova_compute[261524]: 2025-09-30 14:51:25.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:25.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:25 compute-0 sudo[285100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:51:25 compute-0 sudo[285100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:25 compute-0 sudo[285100]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:25 compute-0 sudo[285125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:51:25 compute-0 sudo[285125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:26 compute-0 podman[285194]: 2025-09-30 14:51:26.296610975 +0000 UTC m=+0.044511440 container create 955e6aefe859a46f83bfd3093755b01fe89122c38a8d5ebc131cb52e5ad421b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:51:26 compute-0 systemd[1]: Started libpod-conmon-955e6aefe859a46f83bfd3093755b01fe89122c38a8d5ebc131cb52e5ad421b9.scope.
Sep 30 14:51:26 compute-0 podman[285194]: 2025-09-30 14:51:26.277121283 +0000 UTC m=+0.025021758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:51:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:51:26 compute-0 podman[285194]: 2025-09-30 14:51:26.392586956 +0000 UTC m=+0.140487451 container init 955e6aefe859a46f83bfd3093755b01fe89122c38a8d5ebc131cb52e5ad421b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:51:26 compute-0 podman[285194]: 2025-09-30 14:51:26.401944902 +0000 UTC m=+0.149845397 container start 955e6aefe859a46f83bfd3093755b01fe89122c38a8d5ebc131cb52e5ad421b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:51:26 compute-0 podman[285194]: 2025-09-30 14:51:26.405987448 +0000 UTC m=+0.153887943 container attach 955e6aefe859a46f83bfd3093755b01fe89122c38a8d5ebc131cb52e5ad421b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:51:26 compute-0 confident_visvesvaraya[285211]: 167 167
Sep 30 14:51:26 compute-0 systemd[1]: libpod-955e6aefe859a46f83bfd3093755b01fe89122c38a8d5ebc131cb52e5ad421b9.scope: Deactivated successfully.
Sep 30 14:51:26 compute-0 podman[285194]: 2025-09-30 14:51:26.407247941 +0000 UTC m=+0.155148426 container died 955e6aefe859a46f83bfd3093755b01fe89122c38a8d5ebc131cb52e5ad421b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_visvesvaraya, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:51:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0216e5a59cc2386140e1f3180d608048345a69550a29e4863a48a8d160e25ab0-merged.mount: Deactivated successfully.
Sep 30 14:51:26 compute-0 podman[285194]: 2025-09-30 14:51:26.453616219 +0000 UTC m=+0.201516684 container remove 955e6aefe859a46f83bfd3093755b01fe89122c38a8d5ebc131cb52e5ad421b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_visvesvaraya, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 14:51:26 compute-0 systemd[1]: libpod-conmon-955e6aefe859a46f83bfd3093755b01fe89122c38a8d5ebc131cb52e5ad421b9.scope: Deactivated successfully.
Sep 30 14:51:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:26.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Sep 30 14:51:26 compute-0 podman[285236]: 2025-09-30 14:51:26.69007741 +0000 UTC m=+0.057221764 container create c031735525054b75fd9a60c8bb7de105601a343bcb8ba9725d6e1ad2fb6cf95c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 14:51:26 compute-0 systemd[1]: Started libpod-conmon-c031735525054b75fd9a60c8bb7de105601a343bcb8ba9725d6e1ad2fb6cf95c.scope.
Sep 30 14:51:26 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:51:26 compute-0 sudo[285250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4de192116eb2db4022225c2d53a084044d9620caf67c83bc834f34e3b42863a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4de192116eb2db4022225c2d53a084044d9620caf67c83bc834f34e3b42863a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4de192116eb2db4022225c2d53a084044d9620caf67c83bc834f34e3b42863a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4de192116eb2db4022225c2d53a084044d9620caf67c83bc834f34e3b42863a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:51:26 compute-0 sudo[285250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:26 compute-0 sudo[285250]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:26 compute-0 podman[285236]: 2025-09-30 14:51:26.674044789 +0000 UTC m=+0.041189143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:51:26 compute-0 podman[285236]: 2025-09-30 14:51:26.779656513 +0000 UTC m=+0.146800847 container init c031735525054b75fd9a60c8bb7de105601a343bcb8ba9725d6e1ad2fb6cf95c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:51:26 compute-0 podman[285236]: 2025-09-30 14:51:26.786266977 +0000 UTC m=+0.153411301 container start c031735525054b75fd9a60c8bb7de105601a343bcb8ba9725d6e1ad2fb6cf95c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:51:26 compute-0 podman[285236]: 2025-09-30 14:51:26.790002725 +0000 UTC m=+0.157147049 container attach c031735525054b75fd9a60c8bb7de105601a343bcb8ba9725d6e1ad2fb6cf95c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:51:26 compute-0 nova_compute[261524]: 2025-09-30 14:51:26.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:27.200Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:51:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:27.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:27 compute-0 lvm[285353]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:51:27 compute-0 lvm[285353]: VG ceph_vg0 finished
Sep 30 14:51:27 compute-0 optimistic_roentgen[285277]: {}
Sep 30 14:51:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:27 compute-0 sshd-session[284740]: Failed password for root from 80.94.93.233 port 38940 ssh2
Sep 30 14:51:27 compute-0 systemd[1]: libpod-c031735525054b75fd9a60c8bb7de105601a343bcb8ba9725d6e1ad2fb6cf95c.scope: Deactivated successfully.
Sep 30 14:51:27 compute-0 systemd[1]: libpod-c031735525054b75fd9a60c8bb7de105601a343bcb8ba9725d6e1ad2fb6cf95c.scope: Consumed 1.093s CPU time.
Sep 30 14:51:27 compute-0 podman[285236]: 2025-09-30 14:51:27.506143796 +0000 UTC m=+0.873288130 container died c031735525054b75fd9a60c8bb7de105601a343bcb8ba9725d6e1ad2fb6cf95c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4de192116eb2db4022225c2d53a084044d9620caf67c83bc834f34e3b42863a-merged.mount: Deactivated successfully.
Sep 30 14:51:27 compute-0 podman[285236]: 2025-09-30 14:51:27.551238911 +0000 UTC m=+0.918383235 container remove c031735525054b75fd9a60c8bb7de105601a343bcb8ba9725d6e1ad2fb6cf95c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_roentgen, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:51:27 compute-0 systemd[1]: libpod-conmon-c031735525054b75fd9a60c8bb7de105601a343bcb8ba9725d6e1ad2fb6cf95c.scope: Deactivated successfully.
Sep 30 14:51:27 compute-0 sudo[285125]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:51:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:51:27 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:27 compute-0 sudo[285369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:51:27 compute-0 ceph-mon[74194]: pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 29 op/s
Sep 30 14:51:27 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:27 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:51:27 compute-0 sudo[285369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:27 compute-0 sudo[285369]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:27.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:28.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:51:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:28.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:29 compute-0 unix_chkpwd[285395]: password check failed for user (root)
Sep 30 14:51:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:51:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:51:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:51:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:51:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:29.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:29 compute-0 ceph-mon[74194]: pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:51:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:51:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:51:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:51:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:51:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:51:30 compute-0 sshd-session[285398]: error: kex_exchange_identification: read: Connection reset by peer
Sep 30 14:51:30 compute-0 sshd-session[285398]: Connection reset by 195.184.76.139 port 33437
Sep 30 14:51:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:30.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:51:30 compute-0 nova_compute[261524]: 2025-09-30 14:51:30.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:31 compute-0 sshd-session[284740]: Failed password for root from 80.94.93.233 port 38940 ssh2
Sep 30 14:51:31 compute-0 sshd-session[284740]: Received disconnect from 80.94.93.233 port 38940:11:  [preauth]
Sep 30 14:51:31 compute-0 sshd-session[284740]: Disconnected from authenticating user root 80.94.93.233 port 38940 [preauth]
Sep 30 14:51:31 compute-0 sshd-session[284740]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:51:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:31.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:31 compute-0 ceph-mon[74194]: pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:51:31 compute-0 nova_compute[261524]: 2025-09-30 14:51:31.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:32 compute-0 unix_chkpwd[285404]: password check failed for user (root)
Sep 30 14:51:32 compute-0 sshd-session[285401]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:51:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:32.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:51:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:33.704Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:33.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:33 compute-0 ceph-mon[74194]: pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Sep 30 14:51:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:34.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Sep 30 14:51:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:34] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Sep 30 14:51:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:34] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Sep 30 14:51:34 compute-0 sshd-session[285401]: Failed password for root from 80.94.93.233 port 25368 ssh2
Sep 30 14:51:35 compute-0 nova_compute[261524]: 2025-09-30 14:51:35.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:35.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:35 compute-0 ceph-mon[74194]: pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Sep 30 14:51:36 compute-0 unix_chkpwd[285409]: password check failed for user (root)
Sep 30 14:51:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:36.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Sep 30 14:51:36 compute-0 nova_compute[261524]: 2025-09-30 14:51:36.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:37.201Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:37.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:37 compute-0 ceph-mon[74194]: pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Sep 30 14:51:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:51:38.269 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:51:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:51:38.269 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:51:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:51:38.269 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:51:38 compute-0 sshd-session[285401]: Failed password for root from 80.94.93.233 port 25368 ssh2
Sep 30 14:51:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:38.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:38.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:51:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:38.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:51:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:39 compute-0 podman[285415]: 2025-09-30 14:51:39.171252987 +0000 UTC m=+0.083752071 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Sep 30 14:51:39 compute-0 podman[285414]: 2025-09-30 14:51:39.178306102 +0000 UTC m=+0.082798226 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:51:39 compute-0 podman[285412]: 2025-09-30 14:51:39.191147259 +0000 UTC m=+0.101984489 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible)
Sep 30 14:51:39 compute-0 podman[285413]: 2025-09-30 14:51:39.197458855 +0000 UTC m=+0.116801259 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:51:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:39.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:39 compute-0 ceph-mon[74194]: pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:40 compute-0 unix_chkpwd[285494]: password check failed for user (root)
Sep 30 14:51:40 compute-0 sshd-session[285399]: Connection closed by 195.184.76.213 port 46337
Sep 30 14:51:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:40.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:40 compute-0 nova_compute[261524]: 2025-09-30 14:51:40.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:41.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:41 compute-0 ceph-mon[74194]: pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:41 compute-0 nova_compute[261524]: 2025-09-30 14:51:41.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:42.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:51:42 compute-0 sshd-session[285401]: Failed password for root from 80.94.93.233 port 25368 ssh2
Sep 30 14:51:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:43.705Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:43.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:43 compute-0 ceph-mon[74194]: pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:51:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:44 compute-0 sshd-session[285401]: Received disconnect from 80.94.93.233 port 25368:11:  [preauth]
Sep 30 14:51:44 compute-0 sshd-session[285401]: Disconnected from authenticating user root 80.94.93.233 port 25368 [preauth]
Sep 30 14:51:44 compute-0 sshd-session[285401]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:51:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:51:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:44.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:51:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:51:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:51:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:44] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Sep 30 14:51:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:44] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Sep 30 14:51:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:51:45 compute-0 unix_chkpwd[285501]: password check failed for user (root)
Sep 30 14:51:45 compute-0 sshd-session[285499]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:51:45 compute-0 nova_compute[261524]: 2025-09-30 14:51:45.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:45.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:45 compute-0 ceph-mon[74194]: pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:51:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:46.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:51:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:51:46 compute-0 sudo[285504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:51:46 compute-0 sudo[285504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:51:46 compute-0 sudo[285504]: pam_unix(sudo:session): session closed for user root
Sep 30 14:51:46 compute-0 nova_compute[261524]: 2025-09-30 14:51:46.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:47.201Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:47 compute-0 sshd-session[285499]: Failed password for root from 80.94.93.233 port 33824 ssh2
Sep 30 14:51:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:47.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:48 compute-0 ceph-mon[74194]: pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:51:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:48.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:48.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:49 compute-0 unix_chkpwd[285532]: password check failed for user (root)
Sep 30 14:51:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:49.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:50 compute-0 ceph-mon[74194]: pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:51:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:50.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:51:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:50 compute-0 nova_compute[261524]: 2025-09-30 14:51:50.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:51 compute-0 sshd-session[285499]: Failed password for root from 80.94.93.233 port 33824 ssh2
Sep 30 14:51:51 compute-0 unix_chkpwd[285535]: password check failed for user (root)
Sep 30 14:51:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:51.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:51 compute-0 nova_compute[261524]: 2025-09-30 14:51:51.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:52 compute-0 ceph-mon[74194]: pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:52.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:51:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:53.706Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:53.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:53 compute-0 sshd-session[285499]: Failed password for root from 80.94.93.233 port 33824 ssh2
Sep 30 14:51:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:54 compute-0 ceph-mon[74194]: pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.082045) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243914082111, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2126, "num_deletes": 251, "total_data_size": 4254812, "memory_usage": 4338416, "flush_reason": "Manual Compaction"}
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243914122506, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4078954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29681, "largest_seqno": 31806, "table_properties": {"data_size": 4069364, "index_size": 6020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19900, "raw_average_key_size": 20, "raw_value_size": 4050223, "raw_average_value_size": 4162, "num_data_blocks": 259, "num_entries": 973, "num_filter_entries": 973, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759243714, "oldest_key_time": 1759243714, "file_creation_time": 1759243914, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 40495 microseconds, and 16098 cpu microseconds.
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.122548) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4078954 bytes OK
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.122568) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.124949) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.124982) EVENT_LOG_v1 {"time_micros": 1759243914124972, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.125008) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4246183, prev total WAL file size 4246183, number of live WAL files 2.
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.126924) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3983KB)], [65(12MB)]
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243914126979, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16672235, "oldest_snapshot_seqno": -1}
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6297 keys, 14489769 bytes, temperature: kUnknown
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243914306960, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14489769, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14447884, "index_size": 25071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 161043, "raw_average_key_size": 25, "raw_value_size": 14334752, "raw_average_value_size": 2276, "num_data_blocks": 1007, "num_entries": 6297, "num_filter_entries": 6297, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759243914, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.307227) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14489769 bytes
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.309259) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.6 rd, 80.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.0 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 6818, records dropped: 521 output_compression: NoCompression
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.309278) EVENT_LOG_v1 {"time_micros": 1759243914309270, "job": 36, "event": "compaction_finished", "compaction_time_micros": 180056, "compaction_time_cpu_micros": 49630, "output_level": 6, "num_output_files": 1, "total_output_size": 14489769, "num_input_records": 6818, "num_output_records": 6297, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243914310102, "job": 36, "event": "table_file_deletion", "file_number": 67}
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759243914312328, "job": 36, "event": "table_file_deletion", "file_number": 65}
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.126823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.312361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.312366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.312367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.312369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:51:54 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:51:54.312371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:51:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:51:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:54.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:51:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:54] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Sep 30 14:51:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:51:54] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Sep 30 14:51:55 compute-0 sshd-session[285499]: Received disconnect from 80.94.93.233 port 33824:11:  [preauth]
Sep 30 14:51:55 compute-0 sshd-session[285499]: Disconnected from authenticating user root 80.94.93.233 port 33824 [preauth]
Sep 30 14:51:55 compute-0 sshd-session[285499]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:51:55 compute-0 nova_compute[261524]: 2025-09-30 14:51:55.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:55.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:56 compute-0 ceph-mon[74194]: pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:56.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:51:56 compute-0 nova_compute[261524]: 2025-09-30 14:51:56.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:51:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:57.201Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:51:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:57.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:51:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:51:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:51:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:57.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:51:58 compute-0 ceph-mon[74194]: pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:51:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:51:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:51:58.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:51:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:51:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:51:58.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:51:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:51:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:51:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:51:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:51:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:51:59
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['vms', 'images', 'backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', '.nfs', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data']
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:51:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:51:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:51:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:51:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:51:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:51:59.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:51:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:00 compute-0 ceph-mon[74194]: pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:52:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:00.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:00 compute-0 nova_compute[261524]: 2025-09-30 14:52:00.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:52:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:52:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:52:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:52:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:52:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:52:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:52:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:01.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:01 compute-0 nova_compute[261524]: 2025-09-30 14:52:01.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:02 compute-0 ceph-mon[74194]: pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:02.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:03.707Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:03.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:04 compute-0 ceph-mon[74194]: pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:04.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:04] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:52:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:04] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:52:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:05.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:05 compute-0 nova_compute[261524]: 2025-09-30 14:52:05.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:06 compute-0 ceph-mon[74194]: pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:06.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:06 compute-0 sudo[285552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:52:06 compute-0 sudo[285552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:06 compute-0 sudo[285552]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:06 compute-0 nova_compute[261524]: 2025-09-30 14:52:06.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:07.202Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:52:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:07.202Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:52:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:07.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:52:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:07.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:08 compute-0 ceph-mon[74194]: pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:08.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:08.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:52:09 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7153 writes, 31K keys, 7153 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 7153 writes, 7153 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1590 writes, 6728 keys, 1590 commit groups, 1.0 writes per commit group, ingest: 11.73 MB, 0.02 MB/s
                                           Interval WAL: 1590 writes, 1590 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     74.8      0.67              0.14        18    0.037       0      0       0.0       0.0
                                             L6      1/0   13.82 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   4.4    115.9     99.4      2.22              0.53        17    0.130     94K   9346       0.0       0.0
                                            Sum      1/0   13.82 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   5.4     88.8     93.6      2.89              0.67        35    0.083     94K   9346       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.8     82.4     84.1      0.81              0.19         8    0.101     26K   2582       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0    115.9     99.4      2.22              0.53        17    0.130     94K   9346       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     75.1      0.67              0.14        17    0.039       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.049, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.26 GB write, 0.11 MB/s write, 0.25 GB read, 0.11 MB/s read, 2.9 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d7211350#2 capacity: 304.00 MB usage: 22.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1237,21.92 MB,7.21205%) FilterBlock(36,275.05 KB,0.0883554%) IndexBlock(36,483.08 KB,0.155183%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Sep 30 14:52:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:09.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:10 compute-0 podman[285581]: 2025-09-30 14:52:10.147932114 +0000 UTC m=+0.073182484 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:52:10 compute-0 podman[285589]: 2025-09-30 14:52:10.147992475 +0000 UTC m=+0.062284497 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Sep 30 14:52:10 compute-0 podman[285583]: 2025-09-30 14:52:10.152146984 +0000 UTC m=+0.068985583 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:52:10 compute-0 podman[285582]: 2025-09-30 14:52:10.173707951 +0000 UTC m=+0.090862978 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:52:10 compute-0 ceph-mon[74194]: pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:10.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:10 compute-0 nova_compute[261524]: 2025-09-30 14:52:10.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:11 compute-0 ceph-mon[74194]: pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/4119497405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:52:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/4119497405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:52:11 compute-0 sshd-session[285665]: Accepted publickey for zuul from 192.168.122.10 port 50618 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:52:11 compute-0 systemd-logind[808]: New session 57 of user zuul.
Sep 30 14:52:11 compute-0 systemd[1]: Started Session 57 of User zuul.
Sep 30 14:52:11 compute-0 sshd-session[285665]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:52:11 compute-0 sudo[285670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Sep 30 14:52:11 compute-0 sudo[285670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:52:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:11.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:11 compute-0 nova_compute[261524]: 2025-09-30 14:52:11.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:12.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:13 compute-0 ceph-mon[74194]: pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:13.708Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:13.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26021 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.25888 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16473 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:14.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:52:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:52:14 compute-0 ceph-mon[74194]: from='client.26021 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:14 compute-0 ceph-mon[74194]: from='client.25888 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:14 compute-0 ceph-mon[74194]: from='client.16473 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:14] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:52:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:14] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:52:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26027 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16479 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:15 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.25894 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Sep 30 14:52:15 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668891824' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 14:52:15 compute-0 ceph-mon[74194]: pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:52:15 compute-0 ceph-mon[74194]: from='client.26027 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:15 compute-0 ceph-mon[74194]: from='client.16479 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:15 compute-0 ceph-mon[74194]: from='client.25894 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3553241736' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 14:52:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/668891824' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 14:52:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2438487227' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 14:52:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:15.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:15 compute-0 nova_compute[261524]: 2025-09-30 14:52:15.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:16.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:16 compute-0 nova_compute[261524]: 2025-09-30 14:52:16.947 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:52:16 compute-0 nova_compute[261524]: 2025-09-30 14:52:16.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:17.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:17.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:17 compute-0 ceph-mon[74194]: pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:17 compute-0 nova_compute[261524]: 2025-09-30 14:52:17.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:52:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:18.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:18.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:18 compute-0 nova_compute[261524]: 2025-09-30 14:52:18.948 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:52:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:19.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:19 compute-0 ceph-mon[74194]: pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:19 compute-0 nova_compute[261524]: 2025-09-30 14:52:19.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:52:19 compute-0 nova_compute[261524]: 2025-09-30 14:52:19.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:52:19 compute-0 nova_compute[261524]: 2025-09-30 14:52:19.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:52:20 compute-0 nova_compute[261524]: 2025-09-30 14:52:20.320 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:52:20 compute-0 nova_compute[261524]: 2025-09-30 14:52:20.320 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:52:20 compute-0 nova_compute[261524]: 2025-09-30 14:52:20.588 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:52:20 compute-0 nova_compute[261524]: 2025-09-30 14:52:20.590 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:52:20 compute-0 nova_compute[261524]: 2025-09-30 14:52:20.590 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:52:20 compute-0 nova_compute[261524]: 2025-09-30 14:52:20.591 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:52:20 compute-0 nova_compute[261524]: 2025-09-30 14:52:20.591 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:52:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:20.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:20 compute-0 ovs-vsctl[286056]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Sep 30 14:52:20 compute-0 nova_compute[261524]: 2025-09-30 14:52:20.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/696561292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.056 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.221 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.223 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4407MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.223 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.224 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.303 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.304 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.329 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:52:21 compute-0 virtqemud[261000]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Sep 30 14:52:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:52:21 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555259406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.782 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.788 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:52:21 compute-0 virtqemud[261000]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Sep 30 14:52:21 compute-0 virtqemud[261000]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Sep 30 14:52:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:21.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.858 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.860 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.860 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:52:21 compute-0 ceph-mon[74194]: pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/795902837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:52:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3164588315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:52:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/92592437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:52:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3555259406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:52:21 compute-0 nova_compute[261524]: 2025-09-30 14:52:21.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:22 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: cache status {prefix=cache status} (starting...)
Sep 30 14:52:22 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:22 compute-0 lvm[286394]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:52:22 compute-0 lvm[286394]: VG ceph_vg0 finished
Sep 30 14:52:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:22 compute-0 nova_compute[261524]: 2025-09-30 14:52:22.493 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:52:22 compute-0 nova_compute[261524]: 2025-09-30 14:52:22.494 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:52:22 compute-0 nova_compute[261524]: 2025-09-30 14:52:22.494 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:52:22 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: client ls {prefix=client ls} (starting...)
Sep 30 14:52:22 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:22.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:22 compute-0 kernel: block vda: the capability attribute has been deprecated.
Sep 30 14:52:22 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26069 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:22 compute-0 nova_compute[261524]: 2025-09-30 14:52:22.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:52:22 compute-0 nova_compute[261524]: 2025-09-30 14:52:22.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:52:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3764232418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Sep 30 14:52:23 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: damage ls {prefix=damage ls} (starting...)
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump loads {prefix=dump loads} (starting...)
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16527 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26090 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Sep 30 14:52:23 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476988986' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16539 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.25924 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:23.709Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:52:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:23.709Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:52:23 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300477890' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26105 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Sep 30 14:52:23 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:23.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:23 compute-0 nova_compute[261524]: 2025-09-30 14:52:23.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:52:23 compute-0 ceph-mon[74194]: pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:23 compute-0 ceph-mon[74194]: from='client.26069 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mon[74194]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4022627666' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mon[74194]: from='client.16527 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mon[74194]: from='client.26090 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3476988986' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1498529480' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3300477890' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:52:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3244641766' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 14:52:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:24 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16554 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Sep 30 14:52:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 14:52:24 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Sep 30 14:52:24 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Sep 30 14:52:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4135644784' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 14:52:24 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.25945 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:24 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26120 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:24 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: get subtrees {prefix=get subtrees} (starting...)
Sep 30 14:52:24 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:24 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16575 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:24 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: ops {prefix=ops} (starting...)
Sep 30 14:52:24 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:24 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.25963 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:24.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:24 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16590 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Sep 30 14:52:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936119354' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 14:52:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:24] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:52:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:24] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:52:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Sep 30 14:52:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3566357869' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 14:52:24 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.25984 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.16539 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.25924 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.26105 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.16554 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2838640628' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4135644784' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.25945 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.26120 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2589956319' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/114353343' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2936119354' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2467371444' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3702438145' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3566357869' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16602 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26162 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: session ls {prefix=session ls} (starting...)
Sep 30 14:52:25 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 14:52:25 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: status {prefix=status} (starting...)
Sep 30 14:52:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Sep 30 14:52:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3485706618' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16626 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Sep 30 14:52:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 14:52:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/314629667' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26011 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 14:52:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715502323' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 14:52:25 compute-0 nova_compute[261524]: 2025-09-30 14:52:25.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:25.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Sep 30 14:52:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/637408045' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.16575 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.25963 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.16590 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.25984 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.16602 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.26162 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/787604236' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/97771316' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3485706618' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1544604691' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2780899382' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/314629667' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/715502323' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2550014662' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/637408045' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1403569308' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26023 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Sep 30 14:52:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846167567' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26213 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mgr[74485]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 14:52:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:52:26.420+0000 7ffa0cb94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 14:52:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Sep 30 14:52:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:26.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 14:52:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124108209' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16686 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:26 compute-0 ceph-mgr[74485]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 14:52:26 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:52:26.780+0000 7ffa0cb94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 14:52:26 compute-0 nova_compute[261524]: 2025-09-30 14:52:26.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:27 compute-0 sudo[287062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:52:27 compute-0 sudo[287062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:27 compute-0 sudo[287062]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.16626 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.26011 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.26023 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/284303029' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4206092146' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1719039201' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2846167567' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2178331630' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2138336617' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1124108209' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1191173112' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1680417367' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3472917256' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1751695812' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Sep 30 14:52:27 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1500533908' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:27.204Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:52:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:27.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Sep 30 14:52:27 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1249534522' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26077 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mgr[74485]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 14:52:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T14:52:27.400+0000 7ffa0cb94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 14:52:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Sep 30 14:52:27 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3411346959' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Sep 30 14:52:27 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2191663680' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 14:52:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:27.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26294 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16755 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:28 compute-0 sudo[287244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:52:28 compute-0 sudo[287244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:28 compute-0 sudo[287244]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.26213 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.16686 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1500533908' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/982813492' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1249534522' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3464558933' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.26077 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1374933503' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3411346959' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1478391475' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2191663680' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1119952762' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1306351722' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 14:52:28 compute-0 sudo[287281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:52:28 compute-0 sudo[287281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Sep 30 14:52:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616689873' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26315 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16770 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:28 compute-0 sudo[287281]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:28.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26119 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 14:52:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713784966' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:43.262012+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 5029888 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:44.262245+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934311 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 4988928 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:45.262395+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4980736 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:46.262563+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4980736 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:47.262817+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4980736 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:48.262994+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4964352 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:49.263140+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933720 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4964352 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:50.263319+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4956160 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:51.263456+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4956160 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:52.263598+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4956160 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:53.263772+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 4947968 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.526376724s of 11.609647751s, submitted: 6
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:54.263920+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933588 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 4939776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:55.264070+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 4931584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:56.264268+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 4931584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:57.264458+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 4931584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:58.264582+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4923392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:19:59.264729+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933588 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4923392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:00.264913+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 4915200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:01.265325+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 4907008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:02.265545+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 4907008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:03.265711+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 4898816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:04.265841+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933588 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 4898816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:05.266079+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 4890624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:06.267677+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 4882432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:07.267898+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 4874240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:08.268245+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 4874240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:09.270523+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933588 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455000 session 0x559a34fb6780
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a34fb63c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 4866048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:10.270677+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 4857856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:11.271411+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 4857856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:12.272397+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4849664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:13.273250+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4849664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:14.274552+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933588 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 4849664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:15.275384+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4833280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:16.275871+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4833280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:17.276320+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 4833280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:18.276622+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 4825088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:19.276812+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933588 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 4825088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:20.276935+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c68c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.601274490s of 26.604181290s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 4816896 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:21.277065+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455800 session 0x559a378d3a40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36ff6800 session 0x559a384e2000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 4816896 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:22.277321+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 4816896 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:23.277566+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 4808704 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:24.277784+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933720 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 4808704 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:25.277937+0000)
Sep 30 14:52:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4792320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:26.278222+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a346e3c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4792320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:27.278459+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 4792320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:28.278605+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:29.278750+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933720 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:30.278896+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4775936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:31.279055+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4775936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:32.279199+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36455000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.633007050s of 11.636272430s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 4759552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:33.279313+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 4751360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:34.279429+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933852 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 4751360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:35.279544+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 3702784 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:36.279701+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 4743168 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:37.279897+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 4743168 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:38.280075+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37928000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 4734976 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:39.281354+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934050 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 4734976 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:40.281677+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4726784 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:41.282853+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 4726784 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:42.283034+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 4718592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:43.283157+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 4710400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:44.283385+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934050 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 4710400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:45.283556+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.011795998s of 13.031287193s, submitted: 5
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 4702208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:46.283718+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 4694016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:47.283928+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 4694016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:48.284194+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 4685824 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:49.284322+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 4685824 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:50.284551+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4677632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:51.284680+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 4677632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:52.285107+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4669440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:53.285298+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:54.285456+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:55.285634+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:56.285819+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:57.286253+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:58.286372+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:20:59.286514+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:00.286758+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:01.286904+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 4636672 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:02.287416+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35203400 session 0x559a37cf65a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a34811000 session 0x559a345841e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 4628480 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:03.287561+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 4628480 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:04.287788+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 4628480 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:05.287950+0000)
Sep 30 14:52:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:52:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:06.288080+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:07.288258+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:08.288389+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:09.288513+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a385fde00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37c68c00 session 0x559a380983c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:10.288637+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37928000 session 0x559a38705c20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455000 session 0x559a385fc1e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:11.289277+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:12.289403+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a346e3c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.825698853s of 27.829246521s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:13.289515+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 4579328 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:14.289922+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934050 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 4579328 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:15.290091+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4571136 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:16.290288+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4571136 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:17.290432+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 4571136 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:18.290584+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 4562944 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:19.290724+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934050 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 4562944 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:20.290858+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34811000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 4546560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:21.290963+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:22.291242+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 4546560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:23.291397+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 4546560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:24.291761+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4538368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934314 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:25.291904+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4538368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:26.292270+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 4538368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36ff6800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:27.292457+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 4530176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.411727905s of 14.504632950s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:28.292657+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 4505600 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:29.292842+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4497408 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934182 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:30.293020+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 4497408 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:31.293244+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4472832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:32.293474+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4472832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:33.293624+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4472832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:34.293784+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4464640 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934182 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:35.293972+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 4456448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:36.294298+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 4448256 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:37.294521+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 4440064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:38.294690+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 4423680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:39.296117+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 4415488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:40.296262+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 4415488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:41.296424+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 4415488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:42.296589+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 4407296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:43.297012+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 4399104 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:44.297303+0000)
Sep 30 14:52:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 4399104 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:45.297448+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 4399104 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:46.297872+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 4390912 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:47.298066+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 4390912 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:48.298229+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4382720 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:49.298381+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4382720 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:50.298514+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 4382720 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:51.298630+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4374528 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:52.298764+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4374528 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:53.298887+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 4374528 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:54.299027+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4366336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:55.299212+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 4366336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:56.299404+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 4349952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:57.299558+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 4349952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36ff6800 session 0x559a378d3680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a34811000 session 0x559a379c21e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:58.299702+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 4349952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:21:59.299852+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 4341760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:00.299996+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 4341760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:01.300156+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 4333568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:02.300429+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 4333568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:03.300569+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 4325376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:04.300695+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 4317184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:05.300872+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933918 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 4317184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:06.301131+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 4308992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:07.301321+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 4308992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:08.301496+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 4292608 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3793b400
Sep 30 14:52:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.003913879s of 41.015987396s, submitted: 4
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:09.301614+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 4292608 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:10.301723+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934050 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4284416 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:11.301867+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 4284416 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:12.302056+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 4259840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:13.302213+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 4251648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:14.302348+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 4251648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:15.302499+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935562 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 4243456 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:16.302710+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 4235264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:17.302896+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 4227072 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:18.303067+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 4227072 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:19.303374+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 4227072 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:20.303588+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935562 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4218880 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:21.303757+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 4218880 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:22.303890+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 4210688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:23.304035+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 4210688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:24.304234+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 4210688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:25.304382+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935562 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.702447891s of 16.718183517s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4194304 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:26.304602+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 4194304 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:27.304777+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 4186112 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:28.304952+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 4186112 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:29.305091+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 4177920 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:30.305265+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935430 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 4169728 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:31.305414+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 4169728 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:32.305548+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4161536 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a3793b400 session 0x559a385fde00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:33.305724+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4161536 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:34.305867+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 4161536 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:35.306029+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935430 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4153344 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:36.306347+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4153344 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:37.306506+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 4153344 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:38.306678+0000)
Sep 30 14:52:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4145152 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:39.306862+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 4145152 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:40.307045+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935430 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 4136960 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:41.307222+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 4128768 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:42.307345+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 4120576 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:43.307461+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 4120576 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3793b400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.692031860s of 18.695718765s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:44.307614+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 4096000 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:45.307737+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 935562 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4087808 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:46.307871+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4087808 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:47.308056+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:48.308241+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4071424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:49.308395+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4071424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:50.308642+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937074 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 4063232 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:51.308913+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 4055040 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:52.309066+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 4055040 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:53.309264+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 4046848 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:54.309430+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 4038656 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:55.309685+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936483 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 4030464 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:56.309926+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 4030464 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:57.310105+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 4030464 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:58.310334+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 4022272 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.570773125s of 14.579764366s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:22:59.310476+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 4022272 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:00.310620+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936351 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 4014080 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:01.310794+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 4005888 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:02.310958+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 4005888 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:03.311203+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 3997696 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:04.311355+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 3997696 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:05.311476+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936351 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 3989504 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:06.311612+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 3989504 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:07.311778+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 3981312 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:08.311932+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 3981312 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:09.312068+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 3981312 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:10.312233+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936351 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 3973120 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:11.312359+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 3973120 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:12.312488+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 3973120 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:13.312638+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 3964928 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:14.312788+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 3964928 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:15.312955+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936351 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 3956736 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:16.313103+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 3956736 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:17.313285+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 3948544 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:18.313451+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 3948544 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:19.313612+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 3948544 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:20.313834+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936351 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 3940352 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:21.313978+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 3940352 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:22.314108+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 3940352 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8577 writes, 33K keys, 8577 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8577 writes, 1988 syncs, 4.31 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8577 writes, 33K keys, 8577 commit groups, 1.0 writes per commit group, ingest: 21.27 MB, 0.04 MB/s
                                           Interval WAL: 8577 writes, 1988 syncs, 4.31 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:23.314250+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:24.314359+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:25.315419+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936351 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:26.315551+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:27.315673+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:28.315855+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:29.316000+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:30.316084+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936351 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:31.316212+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:32.316335+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:33.316496+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:34.316840+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37929c00 session 0x559a352fb4a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35203400 session 0x559a34584780
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:35.316971+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936351 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:36.317101+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:37.317263+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:38.317392+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:39.317584+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:40.317751+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936351 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:41.317879+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:42.318046+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:43.318234+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:44.318399+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 3792896 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:45.318515+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34811000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.577968597s of 46.597072601s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 936483 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 3792896 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:46.318659+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 3792896 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:47.318870+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 3784704 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:48.319046+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36455000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:49.319191+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:50.319301+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937995 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:51.319440+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36ff6800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:52.319549+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:53.319681+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:54.319852+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:55.319985+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937995 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:56.320213+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:57.320434+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.221809387s of 12.243607521s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:58.320597+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 3727360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:23:59.320741+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 3727360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:00.320912+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937404 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 3727360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:01.321046+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 3719168 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:02.321208+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 3719168 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:03.321342+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 3710976 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:04.321542+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 3710976 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455000 session 0x559a386e52c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a35e89a40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:05.321772+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937272 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 3702784 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:06.321960+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 3702784 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:07.322210+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:08.322384+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:09.322525+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:10.322654+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937272 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:11.322798+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:12.322946+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:13.323097+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:14.323257+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:15.323507+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937272 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37928000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.482225418s of 18.550962448s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:16.323670+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:17.323861+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:18.324044+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 3661824 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:19.324216+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36459800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:20.324347+0000)
Sep 30 14:52:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940428 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:21.324504+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 3637248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:22.324642+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 3637248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:23.324826+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:24.325023+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:25.325241+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940428 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3620864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:26.325385+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3620864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:27.325552+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 3620864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.089201927s of 12.099145889s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:28.325696+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3612672 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:29.325856+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 3612672 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:30.326025+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939705 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3604480 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:31.326162+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 3604480 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:32.326341+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3596288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:33.326510+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3596288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:34.326632+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 3596288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:35.326800+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939705 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3588096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:36.327010+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3588096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:37.327239+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 3588096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:38.327362+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3579904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:39.327561+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36ff6800 session 0x559a386cf4a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a34811000 session 0x559a386ced20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3579904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:40.327727+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939705 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3579904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36459800 session 0x559a386e5e00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a3793b400 session 0x559a35a3cb40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:41.327898+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 3571712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:42.328087+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 3571712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:43.328230+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3563520 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:44.328367+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3563520 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:45.328502+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939705 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 3563520 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:46.328660+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 3547136 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:47.328843+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 3547136 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:48.328994+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 3538944 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:49.329278+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 3538944 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:50.329476+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a346e3c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.118692398s of 22.125301361s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939837 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:51.329629+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:52.329747+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3522560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:53.329863+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3522560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:54.330001+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3522560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:55.330107+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941481 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3514368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:56.330215+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36455000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3514368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:57.330393+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:58.330577+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:24:59.330728+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:00.330883+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941481 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:01.331030+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:02.331198+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.300581932s of 12.359103203s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:03.331313+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:04.331500+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:05.331675+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940167 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:06.331786+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 4096000 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:07.331937+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4087808 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:08.332074+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4087808 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:09.332214+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:10.332419+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940035 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:11.332579+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:12.332808+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:13.332953+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:14.333105+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:15.333290+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940035 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:16.333421+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:17.333596+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:18.333769+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:19.333919+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:20.334062+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940035 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:21.334228+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:22.334406+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:23.334554+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:24.334693+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:25.334902+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940035 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37929c00 session 0x559a38705c20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35203400 session 0x559a378d3680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:26.335054+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:27.335249+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:28.335354+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:29.335500+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:30.335628+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940035 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:31.335795+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:32.335943+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:33.336084+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:34.336232+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4079616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:35.336362+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940035 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4071424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:36.336464+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34811000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.638031006s of 34.067096710s, submitted: 122
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4071424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:37.336641+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4071424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:38.336768+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4071424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:39.336898+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4071424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:40.337032+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941679 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4071424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:41.337161+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4071424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:42.337355+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 4046848 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:43.337492+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 4055040 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:44.337620+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 4022272 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:45.337752+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942600 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 3964928 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:46.337901+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.028728485s of 10.010936737s, submitted: 219
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:47.338052+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:48.338254+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:49.338416+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:50.338588+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942600 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:51.338742+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:52.338908+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:53.339053+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:54.339217+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:55.339524+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942468 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:56.339649+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26336 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:57.339860+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:58.340007+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:25:59.340211+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:00.340405+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942468 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:01.340560+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35203400 session 0x559a34584780
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a34811000 session 0x559a345845a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:02.341467+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:03.342292+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:04.343028+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:05.343385+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942468 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:06.343505+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3907584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:07.343781+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3907584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:08.344011+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3907584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:09.344221+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3907584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:10.344474+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3907584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942468 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:11.344613+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:12.344782+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:13.345005+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36459800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.258413315s of 26.503026962s, submitted: 22
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:14.345154+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:15.345499+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942600 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:16.345760+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:17.346095+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:18.346265+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:19.346499+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:20.346727+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944112 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:21.346913+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:22.347131+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:23.347359+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:24.347536+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:25.347769+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.095639229s of 12.118737221s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942930 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:26.347914+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:27.348228+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:28.348429+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:29.348653+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:30.348766+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 3915776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:31.348930+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3907584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:32.349122+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3907584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:33.349281+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3907584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:34.349554+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 3907584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:35.349752+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:36.349966+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:37.350152+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:38.350399+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:39.350579+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:40.350740+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:41.350883+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:42.351028+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:43.351213+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:44.351357+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:45.351512+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:46.351670+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:47.351883+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:48.352012+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:49.352145+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:50.352237+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:51.352375+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:52.352519+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:53.352628+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:54.352781+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:55.352962+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:56.353082+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:57.353261+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:58.353387+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:26:59.353508+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:00.353614+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:01.353787+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:02.353944+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:03.354134+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:04.354242+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 3899392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:05.354404+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:06.354570+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:07.354882+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:08.355005+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:09.355134+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:10.355244+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:11.355409+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:12.355552+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:13.355705+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:14.355854+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:15.355976+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:16.356108+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:17.356279+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:18.356454+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:19.356660+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:20.356884+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:21.357073+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:22.357272+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:23.357416+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:24.357590+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:25.357812+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:26.357977+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:27.358138+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:28.358242+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:29.358385+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:30.358521+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:31.358638+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:32.358803+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:33.358946+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:34.359085+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:35.359238+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:36.359403+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:37.359592+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:38.359739+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:39.360139+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:40.360367+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:41.360534+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:42.360690+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:43.360850+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:44.361021+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:45.361216+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:46.361359+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:47.361566+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:48.361720+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:49.361872+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:50.362014+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:51.362140+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:52.362231+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:53.362366+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:54.362479+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:55.362629+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:56.362828+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:57.363028+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:58.363573+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:59.363875+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:00.364003+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:01.364125+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:02.364225+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:03.364332+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:04.364471+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:05.364640+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:06.364877+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:07.365112+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:08.365281+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:09.365498+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:10.365640+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:11.365811+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:12.366021+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:13.366209+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:14.366404+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:15.366568+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:16.366723+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:17.366934+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:18.367064+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:19.367618+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:20.367760+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37929c00 session 0x559a386f30e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36459800 session 0x559a352fa1e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:21.367889+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:22.368024+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:23.368148+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:24.370600+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:25.370734+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:26.370861+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:27.371042+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:28.371227+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:29.371424+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:30.371565+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3793b400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 125.835968018s of 125.864295959s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:31.371678+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942930 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:32.371847+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:33.371990+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:34.372103+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:35.372224+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:36.372401+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942930 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:37.372546+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:38.372721+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:39.372851+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:40.372996+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:41.373109+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942339 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:42.373303+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:43.373426+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:44.373681+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:45.373888+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:46.374034+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942339 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:47.374219+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.006479263s of 17.014936447s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:48.374379+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:49.374494+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:50.374780+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:51.374958+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:52.375091+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a34810800 session 0x559a37980960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37928000 session 0x559a386e5860
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:53.375503+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:52:28 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:54.375760+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:55.375917+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:56.376118+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:57.376491+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:58.376653+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:59.376818+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:00.376974+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:01.377090+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:02.377293+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37928000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.830188751s of 14.834419250s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:03.377454+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:04.378029+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:05.378145+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:06.378302+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942339 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:07.378482+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:08.378598+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:09.378719+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:10.378911+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:11.379087+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942339 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:12.379240+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:13.379364+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:14.380066+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:15.381277+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.002157211s of 13.013655663s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:16.381695+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:17.381893+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:18.382060+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:19.382204+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:20.382357+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:21.382484+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:22.382627+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:23.382772+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:24.382905+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:25.383065+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:26.383229+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:27.383373+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:28.383508+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:29.383625+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:30.383756+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:31.383891+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:32.384004+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:33.384139+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:34.384274+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:35.384402+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:36.384537+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:37.384664+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:38.384783+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:39.384911+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:40.385065+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:41.385198+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:42.385373+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:43.385509+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:44.385704+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:45.385852+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455000 session 0x559a380d6d20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a380d65a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:46.386039+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a3793b400 session 0x559a382a1a40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:47.386259+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:48.386438+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:49.386604+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:50.386745+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:51.386863+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:52.386997+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:53.387148+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:54.387269+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:55.387355+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:56.387489+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34811000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.213050842s of 40.217830658s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942339 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 86368256 unmapped: 2801664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:57.387668+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:58.387808+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:59.387952+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36459800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:00.388102+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:01.388234+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943983 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:02.388386+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:03.388557+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:04.388753+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:05.388939+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:06.389088+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945495 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:07.389257+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:08.389396+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:09.389510+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:10.389632+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.983433723s of 13.998912811s, submitted: 4
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:11.389839+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945231 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:12.389997+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:13.390123+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:14.390236+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:15.390381+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:16.390524+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945231 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:17.390768+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:18.390896+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:19.391070+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:20.391245+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:21.391415+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945231 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:22.391559+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:23.391696+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36459800 session 0x559a3852a3c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37928000 session 0x559a381fbc20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:24.391804+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:25.391938+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:26.392100+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945231 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:27.392267+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:28.392488+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:29.392677+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:30.392810+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:31.392941+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945231 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:32.393114+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:33.393225+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:34.393343+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a346e3c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.723939896s of 24.757307053s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:35.393470+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:36.393611+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945363 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:37.393811+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:52:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:52:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16788 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:38.393983+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:39.394102+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:40.394221+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36455000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:41.394379+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946875 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:42.394522+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455400 session 0x559a35e89e00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36459800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:43.394651+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:44.394761+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36458400 session 0x559a35a3d680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3793b400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:45.394920+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:46.395073+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.069216728s of 12.078630447s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945693 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:47.395240+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:48.395386+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:49.395512+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:50.395686+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:51.395854+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945693 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:52.396007+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:53.396241+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:54.396388+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:55.396520+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:56.396647+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:57.396800+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:58.397012+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:59.397204+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:00.397383+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:01.397527+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:02.397700+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:03.397853+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:04.398014+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:05.398151+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:06.398302+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:07.398442+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:08.398592+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:09.398734+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:10.399712+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:11.399838+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:12.399987+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:13.400147+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:14.400257+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:15.400371+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:16.400488+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455000 session 0x559a3852b0e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a3852ab40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:17.400686+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:18.400852+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:19.400968+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:20.401091+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:21.401225+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:22.401375+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:23.401566+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:24.401922+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:25.402386+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:26.402514+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36458400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.949089050s of 39.967605591s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:27.402671+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945693 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:28.402849+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:29.403101+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:30.403379+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:31.403725+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:32.403854+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945693 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c6b800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:33.404042+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:34.404223+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:35.404376+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:36.404545+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:37.404804+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945693 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:38.404949+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.115405083s of 12.119539261s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:39.405125+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:40.405429+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:41.405761+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:42.405942+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945102 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:43.406261+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:44.406540+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:45.406819+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:46.407203+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:47.407399+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:48.407585+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:49.407697+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:50.407825+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:51.407984+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:52.408157+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:53.408884+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:54.409044+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:55.409181+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:56.409376+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:57.409568+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:58.409682+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:59.409842+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:00.409973+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:01.410244+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:02.410407+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:03.410536+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:04.410653+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:05.410774+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:06.410926+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:07.411649+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:08.411788+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:09.411918+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:10.412045+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:11.412237+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:12.412427+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:13.412572+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:14.412718+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:15.412878+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37929c00 session 0x559a37f42780
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35203400 session 0x559a352fa960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:16.413005+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:17.413252+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:18.413377+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:19.413531+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:20.413679+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:21.413810+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:22.413939+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:23.414062+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:24.414194+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:25.414322+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a379d3400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.930648804s of 46.940258026s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:26.414487+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:27.415053+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945102 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:28.415235+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eacc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:29.415565+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:30.415688+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:31.415896+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:32.416041+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946614 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:33.416215+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:34.416342+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:35.416522+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:36.416690+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:37.416888+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946614 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:38.417044+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:39.417626+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:40.418004+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:41.418271+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:42.418401+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946614 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.521619797s of 16.530069351s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:43.418760+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:44.418950+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:45.419604+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:46.419764+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a379d3400 session 0x559a379814a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:47.420052+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946482 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:48.420413+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:49.420653+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:50.420783+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:51.423490+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:52.423607+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946482 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:53.423713+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:54.423857+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:55.423990+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:56.424216+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a346e3c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.257704735s of 14.261231422s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:57.424371+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946614 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:58.424523+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:59.424677+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:00.424818+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:01.424954+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:02.425156+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948126 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:03.425309+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:04.425419+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:05.425529+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:06.425646+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:07.425826+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947535 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:08.425950+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:09.426069+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:10.426202+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.876503944s of 14.200399399s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:11.426343+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:12.426440+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947403 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:13.426569+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:14.426721+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:15.426857+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:16.426954+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:17.427103+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947403 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:18.427419+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a35b7b860
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:19.427601+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:20.427748+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:21.427880+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 3727360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:22.428010+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947403 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 3727360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9389 writes, 35K keys, 9389 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9389 writes, 2394 syncs, 3.92 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 812 writes, 1249 keys, 812 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                           Interval WAL: 812 writes, 406 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:23.428136+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:24.428236+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:25.428355+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:26.428485+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:27.428653+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947403 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:28.428788+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:29.428938+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.340478897s of 18.891267776s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:30.429070+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:31.429224+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:32.429360+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947535 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:33.429508+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:34.429672+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:35.429811+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:36.429938+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36455000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:37.430267+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950559 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:38.430411+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:39.430538+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.068456650s of 10.116128922s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:40.430734+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:41.430855+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:42.430991+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949968 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:43.431284+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:44.431468+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:45.431628+0000)
Sep 30 14:52:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:28.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:46.431774+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:47.431942+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949836 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:48.432066+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:49.432259+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:50.432396+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:51.432617+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:52.432797+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949836 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:53.432903+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:54.433034+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:55.433196+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35eacc00 session 0x559a3750a960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a34811000 session 0x559a3852a1e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:56.433336+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:57.433481+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949836 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:58.433625+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:59.433745+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:00.433856+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:01.434021+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:02.434157+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949836 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:03.434304+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:04.434505+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:05.434691+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.097017288s of 26.103408813s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:06.434816+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:07.435000+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949968 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:08.435249+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:09.435435+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:10.435665+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:11.435820+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:12.435983+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949968 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:13.436208+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:14.436383+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:15.436559+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:16.436733+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:17.436954+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948786 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:18.437124+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:19.437327+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:20.437486+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.030439377s of 15.042234421s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:21.437648+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:22.437834+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37c6b800 session 0x559a3852be00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36458400 session 0x559a3852bc20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948654 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:23.438022+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:24.438183+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:25.438375+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:26.438527+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:27.438856+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948654 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:28.439036+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:29.439274+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:30.439482+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:31.439683+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:32.439870+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948654 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36458400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.988931656s of 11.991481781s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:33.440010+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:34.440231+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:35.440425+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37929c00 session 0x559a374d52c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 3653632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:36.441221+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 3653632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:37.441414+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948786 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:38.442349+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 3653632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:39.443256+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 3653632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:40.443863+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:41.444094+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:42.444749+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948786 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:43.445367+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:44.445895+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:45.446096+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a346e3c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.982230186s of 12.986698151s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:46.446245+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:47.446441+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948786 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:48.446845+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:49.447069+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:50.447302+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:51.447557+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:52.447753+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34811000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950298 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:53.447982+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:54.448274+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:55.448457+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:56.448652+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:57.449000+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950298 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:58.449270+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.175735474s of 12.185593605s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:59.449430+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:00.449631+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:01.449781+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:02.449904+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:03.450011+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:04.450204+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:05.450340+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3579904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:06.450498+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:07.450729+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:08.450944+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:09.451397+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:10.451626+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:11.451872+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:12.452229+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:13.452988+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:14.453273+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:15.453839+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:16.454019+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:17.454390+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:18.454588+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:19.454853+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:20.455034+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:21.455249+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:22.455384+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:23.455569+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:24.455727+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:25.456048+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:26.456283+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:27.456502+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:28.456706+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:29.456941+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:30.457265+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:31.457502+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:32.457633+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:33.457861+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:34.458107+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:35.458346+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:36.458555+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:37.458808+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:38.459030+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:39.459240+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:40.459482+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:41.459663+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:42.459837+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.053421021s of 44.496948242s, submitted: 120
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:43.460024+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:44.460215+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3334144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:45.460351+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:46.460489+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:47.460684+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:48.460871+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:49.461057+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:50.461265+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:51.461464+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:52.461649+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:53.461809+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:54.462089+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:55.462258+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:56.462393+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:57.462649+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:58.462834+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:59.463006+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:00.463219+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:01.463375+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:02.463714+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:03.463869+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:04.464034+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:05.464248+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455000 session 0x559a380d63c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35203400 session 0x559a34fb4b40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:06.464446+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:07.464632+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a34811000 session 0x559a35a3d680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a3852b4a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:08.464856+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:09.465000+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:10.465115+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:11.465314+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:12.465434+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:13.465807+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:14.466030+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:15.466654+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.520961761s of 33.186458588s, submitted: 236
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:16.466790+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:17.468042+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36455000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:18.468215+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950430 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:19.468480+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:20.469148+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:21.469566+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:22.470107+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:23.470271+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951942 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:24.470470+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:25.470632+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:26.471073+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:27.471450+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.164124489s of 12.175537109s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:28.471630+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951351 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36458400 session 0x559a3852a3c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:29.471803+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:30.472124+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:31.472409+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:32.472571+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:33.472805+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951219 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:34.473099+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:35.473418+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:36.473676+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:37.473958+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:38.474308+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951087 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:39.474483+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eacc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.271492004s of 11.303041458s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:40.474686+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:41.474863+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:42.475120+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c6b800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:43.475393+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954243 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:44.475606+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:45.475828+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:46.476001+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:47.476266+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:48.476464+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954243 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:49.476629+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:50.476794+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 sudo[287563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:51.476978+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:52.477225+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.656469345s of 13.668901443s, submitted: 4
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:53.477406+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:54.477630+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:55.477815+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:56.477985+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:57.478251+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:58.478396+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:59.478620+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:00.478783+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:01.478931+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:02.479129+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:03.479317+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 sudo[287563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:04.479502+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:05.479653+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:06.479869+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:07.480258+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:08.480454+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:09.480744+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:10.480925+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:11.481092+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:12.481289+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:13.481428+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:14.481601+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:15.481761+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:16.481932+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:17.482198+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:18.482337+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:19.482570+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:20.483380+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:21.484140+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:22.484663+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 sudo[287563]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:23.485291+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:24.485746+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:25.486256+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:26.486580+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:27.486812+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:28.487055+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:29.487223+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:30.487431+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:31.487849+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:32.488084+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37929c00 session 0x559a37d7ed20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35203400 session 0x559a37d7e5a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35eacc00 session 0x559a37d7f2c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:33.488317+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:34.488492+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:35.488712+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:36.488923+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:37.489276+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:38.489448+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:39.489613+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:40.489770+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:41.489933+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:42.490087+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c69400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.957733154s of 49.961242676s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:43.490256+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953784 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:44.490419+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:45.490590+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:46.490733+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:47.490905+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:48.491871+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953784 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c0a400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:49.492032+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37944800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:50.492232+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:51.493354+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:52.494404+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:53.494980+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954705 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:54.495785+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:55.495971+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.094205856s of 12.115049362s, submitted: 4
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:56.496595+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:57.497146+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:58.497736+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953982 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:59.498252+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:00.498758+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:01.499088+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:02.499556+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:03.499979+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:04.500403+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:05.500768+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:06.501143+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:07.501643+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:08.501948+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:09.502156+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:10.502521+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:11.502822+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:12.503106+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:13.503457+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:14.503697+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:15.503958+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:16.504214+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:17.504473+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:18.504616+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37944800 session 0x559a385c5e00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35e38000 session 0x559a37d7fe00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:19.504773+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:20.504966+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:21.505088+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37c0a400 session 0x559a37d7f0e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37c69400 session 0x559a37d7f860
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:22.505231+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:23.505358+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:24.505509+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:25.505656+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:26.505843+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:27.506019+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:28.506247+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:29.506356+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.027877808s of 34.041492462s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:30.506485+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:31.506616+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:32.506807+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:33.506944+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954114 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:34.507115+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:35.507269+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eacc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:36.507476+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:37.507694+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:38.507884+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955626 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:39.508022+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:40.508158+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:41.508351+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.082656860s of 12.092800140s, submitted: 3
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:42.508525+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:43.508714+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954903 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:44.508875+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:45.509043+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:46.509306+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36458400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:47.509508+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 3203072 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _renew_subs
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:48.509650+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 142 ms_handle_reset con 0x559a35eacc00 session 0x559a35b7ad20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 142 ms_handle_reset con 0x559a35e38000 session 0x559a3852ab40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 2129920 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962227 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:49.509819+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc24e000/0x0/0x4ffc00000, data 0xfb188/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 143 ms_handle_reset con 0x559a36458400 session 0x559a34fb7c20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 2121728 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _renew_subs
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:50.509999+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 145 ms_handle_reset con 0x559a35e38000 session 0x559a386f2000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:51.510234+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:52.510403+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:53.510605+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022669 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:54.510798+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x8ff3a8/0x9b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:55.510965+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:56.511111+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:57.511340+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _renew_subs
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.849364281s of 15.955332756s, submitted: 38
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:58.511485+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024663 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eacc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:59.511643+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:00.511818+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:01.511983+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:02.512152+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:03.512351+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024795 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:04.512519+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:05.512714+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c0a400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:06.512905+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:07.513244+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:08.513478+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025467 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:09.513633+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:10.513825+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:11.513986+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:12.514151+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:13.514387+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025467 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:14.514527+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.920860291s of 16.938102722s, submitted: 15
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:15.514689+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:16.514937+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:17.515309+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:18.515592+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025335 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:19.515745+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:20.515910+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:21.516067+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:22.516212+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:23.516420+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:24.516636+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025335 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:25.516864+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:26.517054+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:27.517277+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:28.517448+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 ms_handle_reset con 0x559a35203400 session 0x559a3852b4a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:29.517656+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025335 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:30.517932+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:31.518054+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:32.518259+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:33.518447+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:34.518614+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025335 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:35.518761+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:36.518960+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:37.519133+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:38.519303+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:39.519535+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025335 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c69400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.400842667s of 25.404727936s, submitted: 1
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:40.519754+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:41.519927+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x903466/0x9b9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:42.520098+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 26402816 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 148 ms_handle_reset con 0x559a37929c00 session 0x559a38090b40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:43.520295+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 26402816 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 148 ms_handle_reset con 0x559a3644d400 session 0x559a37c5f0e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:44.520507+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156281 data_alloc: 218103808 data_used: 139264
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 26402816 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:45.520662+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 26402816 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f7fc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 148 ms_handle_reset con 0x559a37f7fc00 session 0x559a378d3680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:46.520804+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa8d5000/0x0/0x4ffc00000, data 0x1a6f5a6/0x1b26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 26386432 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 148 ms_handle_reset con 0x559a35203400 session 0x559a381fa000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:47.521023+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 148 ms_handle_reset con 0x559a35e38000 session 0x559a385c1680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 26206208 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:48.521286+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa8b1000/0x0/0x4ffc00000, data 0x1a935b6/0x1b4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 26206208 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:49.521431+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159771 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa8b1000/0x0/0x4ffc00000, data 0x1a935b6/0x1b4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 26206208 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:50.521556+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 101638144 unmapped: 12197888 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:51.521712+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.004156113s of 12.149291992s, submitted: 30
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:52.521832+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:53.521987+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:54.522152+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285766 data_alloc: 234881024 data_used: 18407424
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa8ad000/0x0/0x4ffc00000, data 0x1a95588/0x1b4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:55.522331+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:56.522488+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:57.522690+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:58.522817+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:59.522926+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285766 data_alloc: 234881024 data_used: 18407424
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:00.523126+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa8ad000/0x0/0x4ffc00000, data 0x1a95588/0x1b4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 2957312 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:01.523328+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.947079659s of 10.143234253s, submitted: 91
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3121152 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8cc5000/0x0/0x4ffc00000, data 0x24d8588/0x2591000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:02.523553+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c69400 session 0x559a385d0b40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 3842048 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:03.523680+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 3842048 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:04.523833+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388708 data_alloc: 234881024 data_used: 19419136
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 3842048 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:05.524044+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 3842048 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:06.524305+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 3842048 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:07.524632+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8c8e000/0x0/0x4ffc00000, data 0x250d588/0x25c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112746496 unmapped: 4235264 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:08.524863+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112746496 unmapped: 4235264 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:09.525056+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384604 data_alloc: 234881024 data_used: 19419136
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112746496 unmapped: 4235264 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:10.525289+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112746496 unmapped: 4235264 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:11.525504+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c6b800 session 0x559a386e4b40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a36455000 session 0x559a37fecb40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:12.525709+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8c93000/0x0/0x4ffc00000, data 0x2510588/0x25c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:13.525834+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:14.526045+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385516 data_alloc: 234881024 data_used: 19488768
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:15.526242+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.836216927s of 13.892159462s, submitted: 28
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:16.526404+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:17.526628+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:18.526806+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8c92000/0x0/0x4ffc00000, data 0x2511588/0x25ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 4210688 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:19.526962+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385740 data_alloc: 234881024 data_used: 19488768
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 4210688 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:20.527121+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 4210688 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:21.527272+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 4210688 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:22.527461+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8c92000/0x0/0x4ffc00000, data 0x2511588/0x25ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 4210688 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:23.527636+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35e38000 session 0x559a352d65a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 4202496 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c69400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c69400 session 0x559a380914a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c6b800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c6b800 session 0x559a381fb680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:24.527794+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385536 data_alloc: 234881024 data_used: 19488768
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 4202496 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:25.527965+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a380d6f00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.024140358s of 10.032184601s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35202000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35202000 session 0x559a34fb6960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112336896 unmapped: 8855552 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35202000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35202000 session 0x559a380d7a40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35e38000 session 0x559a37f423c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:26.528124+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a386cef00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c69400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c69400 session 0x559a384e5680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 8839168 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:27.528341+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 8839168 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:28.528464+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 8839168 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:29.528594+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423021 data_alloc: 234881024 data_used: 19492864
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112377856 unmapped: 8814592 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:30.528782+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 8806400 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:31.528939+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 8806400 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:32.529090+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 7757824 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:33.529283+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c6b800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 7757824 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:34.529421+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423345 data_alloc: 234881024 data_used: 19529728
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 5177344 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:35.529518+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 5177344 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:36.529665+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 5177344 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:37.529834+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.171621323s of 12.282431602s, submitted: 33
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 5177344 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:38.530011+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 5177344 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:39.530234+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1444341 data_alloc: 234881024 data_used: 22675456
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 5144576 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:40.530444+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 5144576 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:41.530730+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 5144576 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:42.530876+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 5144576 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:43.531020+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 5128192 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:44.531150+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447117 data_alloc: 234881024 data_used: 22716416
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 5595136 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:45.531356+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 4235264 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:46.531488+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119308288 unmapped: 3989504 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:47.531690+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f84c7000/0x0/0x4ffc00000, data 0x2cd15fa/0x2d8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:48.531915+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f84cf000/0x0/0x4ffc00000, data 0x2cd15fa/0x2d8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:49.532065+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479425 data_alloc: 234881024 data_used: 22691840
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:50.532251+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:51.532419+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.723421097s of 13.870462418s, submitted: 62
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f84cf000/0x0/0x4ffc00000, data 0x2cd15fa/0x2d8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:52.532667+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:53.532819+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:54.533014+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f84cf000/0x0/0x4ffc00000, data 0x2cd15fa/0x2d8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c0a400 session 0x559a35a8f860
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eacc00 session 0x559a374d52c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479593 data_alloc: 234881024 data_used: 22691840
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 4284416 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:55.533292+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c6b800 session 0x559a385d14a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f84d0000/0x0/0x4ffc00000, data 0x2cd15fa/0x2d8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35202000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35202000 session 0x559a35b56780
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 5898240 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:56.533441+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 5898240 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:57.533635+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 5898240 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:58.533786+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 5898240 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:59.533975+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395603 data_alloc: 234881024 data_used: 19476480
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 5898240 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:00.534129+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8c8b000/0x0/0x4ffc00000, data 0x2512588/0x25cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37929c00 session 0x559a385d12c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644d400 session 0x559a35074b40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:01.534496+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 5914624 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35e38000 session 0x559a382a05a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:02.534740+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:03.534892+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:04.535038+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060006 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:05.535251+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:06.535425+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:07.535662+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:08.535870+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:09.536040+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060006 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:10.536273+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:11.536432+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:12.536596+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:13.536800+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:14.536969+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060006 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:15.537112+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:16.537293+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:17.538319+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:18.538506+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.074157715s of 27.186643600s, submitted: 39
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:19.538676+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059874 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:20.538856+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:21.538991+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:22.539118+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:23.539306+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:24.539470+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059874 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:25.539641+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:26.539809+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:27.540022+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35202000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35202000 session 0x559a34fb5e00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:28.540297+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:29.540472+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090172 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:30.540647+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:31.540961+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:32.541222+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644d400 session 0x559a385fc000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:33.541409+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37929c00 session 0x559a35e89e00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c6b800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c6b800 session 0x559a3852af00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a382e2d20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:34.541896+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090172 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:35.542056+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:36.542238+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:37.542452+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:38.542556+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:39.542684+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114340 data_alloc: 218103808 data_used: 3620864
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:40.542879+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:41.543018+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:42.543192+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:43.543369+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:44.543527+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114340 data_alloc: 218103808 data_used: 3620864
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:45.543663+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.532819748s of 26.560840607s, submitted: 9
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:46.543849+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 17653760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x1264578/0x131c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:47.544062+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107945984 unmapped: 17522688 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:48.544228+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:49.544357+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162178 data_alloc: 218103808 data_used: 3960832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:50.544501+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:51.544639+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:52.544760+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:53.544906+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:54.545078+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162178 data_alloc: 218103808 data_used: 3960832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:55.545251+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:56.545437+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:57.545660+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:58.545826+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:59.545991+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162178 data_alloc: 218103808 data_used: 3960832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:00.546258+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:01.546405+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:02.546562+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:03.546757+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:04.546889+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162330 data_alloc: 218103808 data_used: 3964928
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:05.547050+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:06.547243+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:07.547406+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a3750b860
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3792a000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.976572037s of 22.099090576s, submitted: 62
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3792a000 session 0x559a38098b40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:08.547610+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:09.547781+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:10.547943+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:11.548092+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:12.548273+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:13.548425+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:14.548570+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:15.548769+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:16.548987+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:17.549249+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:18.549392+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:19.549574+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:20.549710+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:21.549846+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:22.549966+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:23.550104+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:24.550256+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:25.550446+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:26.550593+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:27.550758+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:28.550887+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:29.550998+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:30.551135+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:31.551286+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:32.551467+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:33.551633+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:34.551799+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:35.551964+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:36.552110+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:37.552330+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:38.552475+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:39.552674+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:40.552891+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:41.554060+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a35e892c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a35b56f00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f8ac00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f8ac00 session 0x559a37d7e960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:42.554269+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a385c4d20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.519138336s of 34.540813446s, submitted: 6
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a3811fa40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 24444928 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3792a000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3792a000 session 0x559a37fec780
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:43.554419+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104185856 unmapped: 24436736 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:44.555259+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104185856 unmapped: 24436736 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa47f000/0x0/0x4ffc00000, data 0xd245da/0xddd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098689 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:45.555546+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104185856 unmapped: 24436736 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a35ecef00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:46.555780+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35afa400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35afa400 session 0x559a37fedc20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 24715264 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a37feda40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37fec1e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:47.556038+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104210432 unmapped: 24412160 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3792a000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:48.556665+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104210432 unmapped: 24412160 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:49.557201+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130042 data_alloc: 218103808 data_used: 4173824
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:50.557488+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0xd485fd/0xe02000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:51.557947+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:52.558261+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:53.558432+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:54.558584+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0xd485fd/0xe02000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130042 data_alloc: 218103808 data_used: 4173824
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:55.558769+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:56.558946+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:57.559128+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 24256512 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:58.559226+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.221706390s of 16.312311172s, submitted: 35
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 24256512 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:59.559336+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108920832 unmapped: 19701760 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191816 data_alloc: 218103808 data_used: 5292032
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:00.559566+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9d00000/0x0/0x4ffc00000, data 0x14a25fd/0x155c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 20094976 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:01.559771+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 20094976 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:02.560069+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c7d000/0x0/0x4ffc00000, data 0x15255fd/0x15df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 20094976 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c7d000/0x0/0x4ffc00000, data 0x15255fd/0x15df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:03.560376+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 20094976 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:04.560597+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 20094976 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200426 data_alloc: 218103808 data_used: 5365760
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:05.560789+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:06.560961+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:07.561230+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:08.561431+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c5c000/0x0/0x4ffc00000, data 0x15465fd/0x1600000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:09.561646+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198370 data_alloc: 218103808 data_used: 5369856
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:10.561896+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:11.562069+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.448533058s of 12.715208054s, submitted: 96
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c5c000/0x0/0x4ffc00000, data 0x15465fd/0x1600000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:12.562241+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c52000/0x0/0x4ffc00000, data 0x15505fd/0x160a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:13.562381+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:14.562646+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198650 data_alloc: 218103808 data_used: 5369856
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:15.562817+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:16.563009+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f000 session 0x559a38705680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a352fbe00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:17.563220+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b7c000/0x0/0x4ffc00000, data 0x1625626/0x16e0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108584960 unmapped: 20037632 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:18.563391+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108584960 unmapped: 20037632 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:19.563605+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108584960 unmapped: 20037632 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221235 data_alloc: 218103808 data_used: 5369856
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:20.563856+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 20029440 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:21.564021+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 20029440 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f99ef000/0x0/0x4ffc00000, data 0x17b265f/0x186d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:22.564284+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 20029440 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3104 syncs, 3.57 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1677 writes, 5010 keys, 1677 commit groups, 1.0 writes per commit group, ingest: 5.64 MB, 0.01 MB/s
                                           Interval WAL: 1677 writes, 710 syncs, 2.36 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:23.564558+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 20029440 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eac000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.542958260s of 12.671705246s, submitted: 30
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eac000 session 0x559a34585e00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:24.564786+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 19726336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225404 data_alloc: 218103808 data_used: 5369856
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:25.565002+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 19726336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:26.565272+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 19546112 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x17d6682/0x1892000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:27.565494+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110125056 unmapped: 18497536 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:28.565704+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 18489344 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x17d6682/0x1892000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:29.565966+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 18481152 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242124 data_alloc: 218103808 data_used: 7774208
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:30.566282+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 18472960 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:31.566524+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 18472960 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:32.566736+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 18472960 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x17d6682/0x1892000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:33.566920+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 18472960 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:34.567052+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 18464768 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242748 data_alloc: 218103808 data_used: 7778304
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:35.567236+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 18464768 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.805513382s of 11.841412544s, submitted: 9
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:36.567393+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110657536 unmapped: 17965056 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:37.567565+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f95d2000/0x0/0x4ffc00000, data 0x1bce682/0x1c8a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:38.567735+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:39.567895+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:40.568095+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298356 data_alloc: 234881024 data_used: 9617408
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:41.568292+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f95d2000/0x0/0x4ffc00000, data 0x1bce682/0x1c8a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:42.568449+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:43.568651+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:44.568873+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f95d2000/0x0/0x4ffc00000, data 0x1bce682/0x1c8a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:45.569027+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298356 data_alloc: 234881024 data_used: 9617408
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a385d0780
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a3450cb40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 17563648 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.844120026s of 10.036936760s, submitted: 67
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:46.569423+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a34fb6780
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 21454848 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:47.569833+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 21454848 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:48.570135+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c4d000/0x0/0x4ffc00000, data 0x15535fd/0x160d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 21454848 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:49.570389+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 21454848 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:50.570551+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206836 data_alloc: 218103808 data_used: 5369856
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 21454848 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a35f2de00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c4d000/0x0/0x4ffc00000, data 0x15535fd/0x160d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3792a000 session 0x559a380d70e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:51.571085+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3792a000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3792a000 session 0x559a35a3d680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:52.571495+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:53.572054+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:54.572638+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:55.572893+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:56.573383+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:57.573827+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:58.574292+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:59.574709+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:00.574911+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:01.575137+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:02.575268+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:03.575427+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:04.576202+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:05.576354+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:06.576473+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:07.576635+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:08.576808+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:09.576966+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:10.577146+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:11.577432+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:12.577779+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:13.577980+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:14.578151+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:15.578420+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:16.578689+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a34fb7c20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a34fb6000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a34fb74a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a34fb6960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.608356476s of 30.821556091s, submitted: 55
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a37fed680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 23838720 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:17.578957+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 23838720 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:18.579994+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa51b000/0x0/0x4ffc00000, data 0xc89578/0xd41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 23838720 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:19.580282+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 23838720 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:20.580419+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106332 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a37fec000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 23838720 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a37fede00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:21.581160+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37fecd20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3792a000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3792a000 session 0x559a3852be00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 23855104 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:22.581393+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa51a000/0x0/0x4ffc00000, data 0xc8959b/0xd42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 23855104 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:23.581825+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 23855104 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:24.581973+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:25.582115+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133049 data_alloc: 218103808 data_used: 3821568
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:26.582639+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:27.582850+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:28.583023+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa51a000/0x0/0x4ffc00000, data 0xc8959b/0xd42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:29.583220+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:30.583423+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133049 data_alloc: 218103808 data_used: 3821568
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:31.583581+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:32.583749+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa51a000/0x0/0x4ffc00000, data 0xc8959b/0xd42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:33.583915+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.136903763s of 17.181941986s, submitted: 9
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:34.584068+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 19349504 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:35.584248+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 19341312 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x126359b/0x131c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178007 data_alloc: 218103808 data_used: 4296704
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:36.584424+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 19341312 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:37.584615+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 20234240 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:38.584788+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 20234240 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:39.584952+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:40.585212+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182447 data_alloc: 218103808 data_used: 4304896
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:41.585447+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:42.585636+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:43.585887+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:44.586071+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:45.586228+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182447 data_alloc: 218103808 data_used: 4304896
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:46.586490+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:47.586664+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:48.586823+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:49.586944+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:50.587145+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182447 data_alloc: 218103808 data_used: 4304896
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:51.587654+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:52.588058+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:53.588209+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:54.588457+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a34fb7860
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:55.588723+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182447 data_alloc: 218103808 data_used: 4304896
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:56.588961+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:57.589279+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:58.589462+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:59.589634+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:00.589818+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182447 data_alloc: 218103808 data_used: 4304896
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:01.590035+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a34fb54a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a385c0f00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:02.590211+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a35b7ba40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f000 session 0x559a35b7b4a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 20201472 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37944400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.113771439s of 28.566558838s, submitted: 42
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37944400 session 0x559a37fec5a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a374d4f00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:03.590376+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 25387008 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:04.590524+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 25387008 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f97b3000/0x0/0x4ffc00000, data 0x19ef5fd/0x1aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:05.590684+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 25346048 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237194 data_alloc: 218103808 data_used: 4304896
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:06.590826+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f97b3000/0x0/0x4ffc00000, data 0x19ef5fd/0x1aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 26673152 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:07.591007+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 26664960 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:08.591262+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a38704780
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 26345472 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:09.591373+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 26345472 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x1a14620/0x1acf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:10.591522+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110084096 unmapped: 23797760 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297304 data_alloc: 234881024 data_used: 12181504
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:11.591667+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 21069824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:12.591812+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 21069824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:13.592003+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 21069824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x1a14620/0x1acf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:14.592256+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 21037056 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x1a14620/0x1acf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:15.592415+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 21004288 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297304 data_alloc: 234881024 data_used: 12181504
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:16.593247+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 21004288 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:17.593409+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 21004288 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:18.593532+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 20996096 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:19.593706+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 20996096 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.973478317s of 17.503797531s, submitted: 156
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:20.593862+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x1a14620/0x1acf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 19308544 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337706 data_alloc: 234881024 data_used: 12423168
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:21.594005+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 19038208 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:22.594127+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 19030016 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:23.594232+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 19030016 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:24.594359+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 19030016 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:25.594513+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 19030016 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341346 data_alloc: 234881024 data_used: 12734464
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:26.594655+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 19030016 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:27.594818+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:28.594955+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:29.595091+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:30.595210+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341346 data_alloc: 234881024 data_used: 12734464
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:31.595482+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:32.595719+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:33.595844+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:34.596080+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:35.596337+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341346 data_alloc: 234881024 data_used: 12734464
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:36.596572+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:37.596763+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:38.596925+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:39.597093+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:40.597266+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341346 data_alloc: 234881024 data_used: 12734464
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:41.597448+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a36459800 session 0x559a384e3680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36ff6400
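The ms_handle_reset / handle_auth_request pairs record peer connections being dropped and re-authenticated; a few of these scattered through the log are routine, whereas a dense stream usually points at messenger or network churn. A throwaway per-second tally over an exported journal (the osd.log path is illustrative):

    import collections

    resets = collections.Counter()
    with open("osd.log") as f:              # e.g. journalctl _PID=82707 > osd.log
        for line in f:
            if "ms_handle_reset" in line:
                resets[line[:15]] += 1      # "Sep 30 14:52:28" timestamp prefix
    print(resets.most_common(5))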
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:42.597627+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 19005440 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.791982651s of 22.966978073s, submitted: 47
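The _kv_sync_thread utilization line says the BlueStore key-value sync thread was idle for 22.79 of the last 22.97 seconds while committing 47 transactions, i.e. the OSD is close to idle. Restating that arithmetic with the figures from the line above:

    idle, window, submitted = 22.791982651, 22.966978073, 47
    print(f"idle {idle / window:.1%}, ~{submitted / window:.1f} commits/s")
    # idle 99.2%, ~2.0 commits/s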
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:43.598053+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 18997248 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3793b400 session 0x559a3852b680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f71000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:44.598243+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114909184 unmapped: 18972672 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:45.598369+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 20578304 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:46.598532+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 20545536 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:47.598759+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 20504576 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203800 session 0x559a375712c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3793b400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:48.598996+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 20504576 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:49.599217+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 20496384 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:50.599397+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 20496384 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:51.599614+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 20488192 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:52.599923+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 20488192 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:53.600102+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 20488192 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:54.600227+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 20480000 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:55.600444+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 20480000 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:56.600657+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 20480000 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:57.600876+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 20480000 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:58.601064+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:59.601249+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:00.601364+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:01.601521+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:02.601670+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:03.601808+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:04.601960+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:05.602094+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:06.602257+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 20463616 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:07.602430+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 20463616 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:08.602722+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:09.602842+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:10.602973+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:11.603142+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:12.603251+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:13.603432+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:14.603599+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 20447232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:15.603800+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 20447232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:16.604024+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 20447232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:17.604233+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 20447232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:18.604380+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 20447232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.032520294s of 36.020671844s, submitted: 236
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a35a3d680
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f000 session 0x559a37c61e00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:19.604510+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37945800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 24952832 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37945800 session 0x559a37cf61e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:20.604637+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190986 data_alloc: 218103808 data_used: 4304896
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:21.604796+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:22.604975+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:23.605123+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:24.605250+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126e59b/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:25.605429+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190986 data_alloc: 218103808 data_used: 4304896
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:26.605565+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:27.605755+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126e59b/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:28.605945+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126e59b/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:29.606257+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:30.606392+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190986 data_alloc: 218103808 data_used: 4304896
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:31.606549+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a3852b0e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:32.606658+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.846877098s of 13.960657120s, submitted: 42
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:33.606815+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a3750a1e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:34.606990+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:35.607163+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:36.607400+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:37.607632+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:38.607784+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:39.607984+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:40.608134+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:41.608333+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:42.608518+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:43.608690+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:44.608860+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:45.609001+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:46.609213+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:47.609383+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:48.609529+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:49.609701+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:50.609872+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:51.610056+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:52.610247+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:53.610417+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:54.610590+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:55.610823+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:56.610994+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:57.611254+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:58.611443+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:59.611631+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:00.611826+0000)
Sep 30 14:52:28 compute-0 sudo[287601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
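The sudo entry shows cephadm, run by ceph-admin and escalating to root, driving ceph-volume "lvm batch" against the pre-created logical volume /dev/ceph_vg0/ceph_lv0 for cluster FSID 5e3c7776-ac03-5698-b79f-a6dc2d80cae6, pinned to a specific quay.io/ceph/ceph image digest and tagged with the default_drive_group OSD spec. A small, illustrative extractor for triaging such entries; the regexes are assumptions about this one line's layout, not a general cephadm log format, and osd.log is again an illustrative export path:

    import re

    keys = {"fsid": r"--fsid (\S+)",
            "image": r"--image (\S+)",
            "device": r"lvm batch --no-auto (\S+)",
            "osdspec": r"OSDSPEC_AFFINITY=(\S+)"}

    with open("osd.log") as f:
        for line in f:
            if "ceph-volume" in line and "lvm batch" in line:
                print({k: m.group(1) for k, pat in keys.items()
                       if (m := re.search(pat, line))})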
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:01.612267+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:02.612657+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.090757370s of 30.147060394s, submitted: 17
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a3852af00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:03.612985+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25944064 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:04.613143+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25944064 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0xeed578/0xfa5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:05.613279+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25944064 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141040 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:06.613442+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f000 session 0x559a374d5c20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25944064 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a3852b4a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:07.613619+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25944064 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0xeed578/0xfa5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:08.613805+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a378d2f00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a384e2780
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 26591232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:09.613978+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 26591232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:10.614128+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107323392 unmapped: 26558464 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170518 data_alloc: 218103808 data_used: 4247552
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:11.614275+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 26034176 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:12.614456+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 26034176 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a35a3d4a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a380d7e00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:13.614593+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.329400063s of 10.396521568s, submitted: 12
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f000 session 0x559a385fdc20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907588/0x9c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:14.614815+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:15.614997+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097695 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:16.615135+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:17.615278+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:18.615416+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:19.615547+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:20.615715+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097695 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:21.615910+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:22.616070+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:23.616239+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:24.616361+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a34fb74a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a379c3a40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a379c25a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a379c34a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36456800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.362629890s of 11.389686584s, submitted: 7
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 23904256 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a36456800 session 0x559a379c2960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36456800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a36456800 session 0x559a35075a40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a37d7e960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:25.616504+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a37d7fe00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37c601e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169478 data_alloc: 218103808 data_used: 147456
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:26.616669+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9fef000/0x0/0x4ffc00000, data 0x11b35ea/0x126d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:27.616852+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9fef000/0x0/0x4ffc00000, data 0x11b35ea/0x126d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:28.616988+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:29.617134+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:30.617272+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: mgrc ms_handle_reset ms_handle_reset con 0x559a36ff6c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1364357926
Sep 30 14:52:28 compute-0 ceph-osd[82707]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1364357926,v1:192.168.122.100:6801/1364357926]
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: get_auth_request con 0x559a3644f000 auth_method 0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: mgrc handle_mgr_configure stats_period=5
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9fef000/0x0/0x4ffc00000, data 0x11b35ea/0x126d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169478 data_alloc: 218103808 data_used: 147456
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:31.617403+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a37c610e0
Sep 30 14:52:28 compute-0 sudo[287601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9fcb000/0x0/0x4ffc00000, data 0x11d75ea/0x1291000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108339200 unmapped: 29220864 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:32.617555+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108339200 unmapped: 29220864 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:33.617673+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 29417472 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:34.617799+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 26157056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:35.618128+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 26157056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a37c614a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37d7fa40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225050 data_alloc: 218103808 data_used: 7917568
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:36.618227+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.493298531s of 11.615280151s, submitted: 40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a378d2960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa878000/0x0/0x4ffc00000, data 0x92b5da/0x9e4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:37.618578+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:38.618744+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:39.618903+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:40.619044+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:41.619253+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106714 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:42.619386+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:43.619514+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:44.619630+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:45.619692+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:46.619867+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106714 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:47.620059+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:48.620201+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:49.620361+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:50.620487+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:51.620637+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106714 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:52.620768+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:53.620924+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:54.621106+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:55.621254+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:56.621401+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106714 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:57.621629+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:58.621757+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:59.621936+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:00.622125+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:01.622277+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106714 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:02.622447+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:03.623951+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:04.625379+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c0b800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.454156876s of 28.542760849s, submitted: 19
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c0b800 session 0x559a37c61c20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 28065792 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:05.625589+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 28065792 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:06.625850+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122452 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 28065792 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:07.626886+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6cb000/0x0/0x4ffc00000, data 0xad9578/0xb91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 28065792 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:08.627903+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a37f42960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 28065792 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:09.628082+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a3750a960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a37570960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37fed4a0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 27983872 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:10.628344+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6cb000/0x0/0x4ffc00000, data 0xad9578/0xb91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a379d3800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 27983872 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:11.628498+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124816 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:12.628712+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:13.628907+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6a7000/0x0/0x4ffc00000, data 0xafd578/0xbb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:14.629453+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:15.629877+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:16.630033+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6a7000/0x0/0x4ffc00000, data 0xafd578/0xbb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136672 data_alloc: 218103808 data_used: 1855488
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6a7000/0x0/0x4ffc00000, data 0xafd578/0xbb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:17.630290+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:18.630579+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:19.631004+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:20.631246+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6a7000/0x0/0x4ffc00000, data 0xafd578/0xbb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:21.631527+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136672 data_alloc: 218103808 data_used: 1855488
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.582111359s of 16.620002747s, submitted: 6
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:22.631704+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 24715264 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:23.631903+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 25042944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:24.632108+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 25042944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:25.632288+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 25042944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa340000/0x0/0x4ffc00000, data 0xe64578/0xf1c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:26.632424+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24961024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181628 data_alloc: 218103808 data_used: 2711552
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:27.632627+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24961024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:28.632758+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 25321472 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:29.633484+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 25321472 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:30.633616+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa31f000/0x0/0x4ffc00000, data 0xe85578/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 25321472 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:31.633762+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 25239552 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182204 data_alloc: 218103808 data_used: 2744320
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:32.633918+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 26968064 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:33.634135+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 26968064 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f400 session 0x559a386e4d20
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a379d3800 session 0x559a386cf860
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.981079102s of 12.167107582s, submitted: 42
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:34.634299+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:35.634854+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a384e30e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa879000/0x0/0x4ffc00000, data 0x92b578/0x9e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:36.635005+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:37.635316+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:38.635585+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:39.635819+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:40.636033+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:41.636261+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:42.636492+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:43.636720+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:44.636938+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:45.637074+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:46.637233+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:47.637456+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:48.637644+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:49.637847+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:50.638080+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:51.638301+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:52.638540+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:53.638749+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:54.638960+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:55.639138+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:56.639388+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:57.639620+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:58.639887+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:59.640122+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:00.640283+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:01.640448+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:02.640582+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.728111267s of 28.756206512s, submitted: 8
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a35e88000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:03.640798+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:04.641036+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:05.641295+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:06.641447+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135562 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:07.641663+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:08.641859+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:09.642057+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:10.642233+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 28286976 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:11.642407+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 28286976 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156842 data_alloc: 218103808 data_used: 3321856
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:12.642616+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 28286976 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:13.642836+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 28286976 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:14.643006+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 28286976 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:15.643248+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 28270592 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:16.643375+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 28270592 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156842 data_alloc: 218103808 data_used: 3321856
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:17.643653+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 28270592 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:18.643854+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 28270592 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:19.644046+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 28270592 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.302663803s of 17.326045990s, submitted: 2
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:20.644299+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112336896 unmapped: 25223168 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:21.644480+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 25157632 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199073 data_alloc: 218103808 data_used: 3313664
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a35b7ba40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35afb400
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35afb400 session 0x559a35b7a960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a37c61860
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a386e50e0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:22.644637+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a34fb6960
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 24764416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b73000/0x0/0x4ffc00000, data 0x1219578/0x12d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:23.644810+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 24764416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:24.645077+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 24764416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:25.645316+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 24764416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:26.645653+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24805376 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216955 data_alloc: 218103808 data_used: 3313664
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:27.645883+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24805376 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a379d3800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:28.646099+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b73000/0x0/0x4ffc00000, data 0x1219578/0x12d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 25477120 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:29.646339+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25927680 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:30.646560+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112271360 unmapped: 25288704 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:31.646719+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227647 data_alloc: 218103808 data_used: 5545984
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b7b000/0x0/0x4ffc00000, data 0x1219578/0x12d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:32.646842+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:33.647040+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:34.647274+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b7b000/0x0/0x4ffc00000, data 0x1219578/0x12d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:35.647507+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:36.647697+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b7b000/0x0/0x4ffc00000, data 0x1219578/0x12d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227647 data_alloc: 218103808 data_used: 5545984
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:37.647843+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:38.647998+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:39.648391+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.042070389s of 19.426975250s, submitted: 48
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 22102016 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:40.648520+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 21430272 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:41.648648+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266207 data_alloc: 218103808 data_used: 6287360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:42.648803+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9836000/0x0/0x4ffc00000, data 0x155e578/0x1616000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:43.648983+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:44.650749+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:45.650872+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:46.650999+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263887 data_alloc: 218103808 data_used: 6287360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:47.651144+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115646464 unmapped: 21913600 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9815000/0x0/0x4ffc00000, data 0x157f578/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:48.651299+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115646464 unmapped: 21913600 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:49.651457+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115646464 unmapped: 21913600 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:50.651618+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 21905408 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:51.651800+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 21905408 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264799 data_alloc: 218103808 data_used: 6356992
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:52.651998+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9815000/0x0/0x4ffc00000, data 0x157f578/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.029740334s of 13.182536125s, submitted: 43
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 21815296 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:53.652143+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 21815296 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:54.652262+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a379d3800 session 0x559a3811eb40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203c00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114909184 unmapped: 22650880 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203c00 session 0x559a37c60f00
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:55.652528+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 23683072 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9dd0000/0x0/0x4ffc00000, data 0xfc4578/0x107c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:56.652735+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 23683072 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193731 data_alloc: 218103808 data_used: 3313664
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:57.652949+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 23683072 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a35f2d2c0
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:58.653124+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a35a3da40
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:59.653299+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:00.653481+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:01.653631+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122767 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:02.653831+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:03.654065+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:04.654225+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:05.654388+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:06.654726+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:28 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:28 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122767 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:07.655150+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:08.655363+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:09.655523+0000)
Sep 30 14:52:28 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:28 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:10.655835+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:11.656031+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122767 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:12.656215+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:13.656397+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:14.656575+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:15.656757+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:16.656952+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122767 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:17.657154+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:18.657345+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:19.657495+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:20.657659+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.805879593s of 27.928400040s, submitted: 32
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a350752c0
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:21.657812+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144923 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:22.657918+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203c00
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203c00 session 0x559a386ce3c0
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa276000/0x0/0x4ffc00000, data 0xb1e578/0xbd6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:23.658113+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a386e4960
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:24.658295+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa276000/0x0/0x4ffc00000, data 0xb1e578/0xbd6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37fedc20
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a385c45a0
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112435200 unmapped: 25124864 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203c00
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:25.658456+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112435200 unmapped: 25124864 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa275000/0x0/0x4ffc00000, data 0xb1e588/0xbd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:26.658572+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa275000/0x0/0x4ffc00000, data 0xb1e588/0xbd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 25272320 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154646 data_alloc: 218103808 data_used: 1196032
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:27.658688+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 25272320 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa275000/0x0/0x4ffc00000, data 0xb1e588/0xbd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:28.658819+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 25272320 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:29.658982+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 25272320 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:30.659213+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:31.659308+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154646 data_alloc: 218103808 data_used: 1196032
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:32.659474+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa275000/0x0/0x4ffc00000, data 0xb1e588/0xbd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:33.659588+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa275000/0x0/0x4ffc00000, data 0xb1e588/0xbd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:34.659732+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:35.659879+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.447950363s of 15.509048462s, submitted: 16
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:36.660026+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 22290432 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261110 data_alloc: 218103808 data_used: 1269760
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:37.660218+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9450000/0x0/0x4ffc00000, data 0x1942588/0x19fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23830528 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:38.660390+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23830528 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:39.660583+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9431000/0x0/0x4ffc00000, data 0x1961588/0x1a1a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23830528 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:40.660720+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9431000/0x0/0x4ffc00000, data 0x1961588/0x1a1a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 23822336 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:41.660867+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 23822336 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263414 data_alloc: 218103808 data_used: 1269760
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:42.661032+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 23822336 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:43.661247+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 24018944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:44.661444+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 24018944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:45.661641+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f942f000/0x0/0x4ffc00000, data 0x1964588/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 24018944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:46.661816+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 24018944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260926 data_alloc: 218103808 data_used: 1269760
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:47.662053+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 24010752 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:48.662199+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 24010752 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f942f000/0x0/0x4ffc00000, data 0x1964588/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a38705860
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.065621376s of 13.279171944s, submitted: 76
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203c00 session 0x559a380d6780
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:49.662358+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c0a400
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 24010752 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:50.662479+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 24010752 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:51.662640+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 25804800 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c0a400 session 0x559a37fed4a0
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:52.662781+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:53.662878+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:54.663021+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:55.663155+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:56.663296+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:57.663582+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:58.663755+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:59.663925+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:00.664105+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:01.664211+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:02.664325+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:03.664427+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:04.664591+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:05.664729+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:06.664908+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:07.665076+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:08.665206+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:09.665444+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:10.665566+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:11.665799+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:12.665962+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:13.666266+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:14.666444+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:15.666583+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:16.666767+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:17.666971+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:18.667097+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:19.667263+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:20.667475+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:21.667637+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:22.667750+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:23.667958+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:24.668109+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:25.668307+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:26.668419+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:27.668604+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:28.668775+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:29.668903+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:30.669056+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:31.669270+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:32.669381+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:33.669537+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:34.669694+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:35.669845+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:36.669924+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:37.670053+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:38.670268+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:39.670430+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:40.670618+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:41.670760+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:42.670895+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:43.671030+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:44.671229+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:45.671369+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:46.671499+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:47.671691+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:48.671816+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:49.671936+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:50.672057+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:51.672228+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:52.672340+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:53.672458+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:54.672572+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:55.672713+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: do_command 'config diff' '{prefix=config diff}'
Sep 30 14:52:29 compute-0 ceph-osd[82707]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Sep 30 14:52:29 compute-0 ceph-osd[82707]: do_command 'config show' '{prefix=config show}'
Sep 30 14:52:29 compute-0 ceph-osd[82707]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Sep 30 14:52:29 compute-0 ceph-osd[82707]: do_command 'counter dump' '{prefix=counter dump}'
Sep 30 14:52:29 compute-0 ceph-osd[82707]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: do_command 'counter schema' '{prefix=counter schema}'
Sep 30 14:52:29 compute-0 ceph-osd[82707]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:56.672833+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 26058752 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:57.673006+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 14:52:29 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 14:52:29 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 14:52:29 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25690112 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 14:52:29 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 14:52:29 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:58.673136+0000)
Sep 30 14:52:29 compute-0 ceph-osd[82707]: do_command 'log dump' '{prefix=log dump}'
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.26294 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.16755 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1078333956' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1616689873' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.26315 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2691991420' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/94825430' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.16770 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1713784966' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3657898015' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/754638809' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26137 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Sep 30 14:52:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815609234' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 14:52:29 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26363 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16809 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:29 compute-0 podman[287748]: 2025-09-30 14:52:29.456147194 +0000 UTC m=+0.038192376 container create c910195b176ab2f6f656115ce5db62209c63c84184b8ffe70c3a56bc1956c42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gates, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:52:29 compute-0 systemd[1]: Started libpod-conmon-c910195b176ab2f6f656115ce5db62209c63c84184b8ffe70c3a56bc1956c42c.scope.
Sep 30 14:52:29 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:52:29 compute-0 podman[287748]: 2025-09-30 14:52:29.438850614 +0000 UTC m=+0.020895816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26155 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:29 compute-0 podman[287748]: 2025-09-30 14:52:29.544362903 +0000 UTC m=+0.126408105 container init c910195b176ab2f6f656115ce5db62209c63c84184b8ffe70c3a56bc1956c42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gates, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 14:52:29 compute-0 podman[287748]: 2025-09-30 14:52:29.552812333 +0000 UTC m=+0.134857515 container start c910195b176ab2f6f656115ce5db62209c63c84184b8ffe70c3a56bc1956c42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gates, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:52:29 compute-0 podman[287748]: 2025-09-30 14:52:29.555801411 +0000 UTC m=+0.137846593 container attach c910195b176ab2f6f656115ce5db62209c63c84184b8ffe70c3a56bc1956c42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:52:29 compute-0 systemd[1]: libpod-c910195b176ab2f6f656115ce5db62209c63c84184b8ffe70c3a56bc1956c42c.scope: Deactivated successfully.
Sep 30 14:52:29 compute-0 vigilant_gates[287784]: 167 167
Sep 30 14:52:29 compute-0 conmon[287784]: conmon c910195b176ab2f6f656 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c910195b176ab2f6f656115ce5db62209c63c84184b8ffe70c3a56bc1956c42c.scope/container/memory.events
Sep 30 14:52:29 compute-0 podman[287748]: 2025-09-30 14:52:29.560078582 +0000 UTC m=+0.142123774 container died c910195b176ab2f6f656115ce5db62209c63c84184b8ffe70c3a56bc1956c42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:52:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c026f7fde0692ba25452a60d5f7c4a41d1f3e35c23f40f41922ac0d71af5dd31-merged.mount: Deactivated successfully.
Sep 30 14:52:29 compute-0 podman[287748]: 2025-09-30 14:52:29.60876198 +0000 UTC m=+0.190807162 container remove c910195b176ab2f6f656115ce5db62209c63c84184b8ffe70c3a56bc1956c42c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gates, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:52:29 compute-0 systemd[1]: libpod-conmon-c910195b176ab2f6f656115ce5db62209c63c84184b8ffe70c3a56bc1956c42c.scope: Deactivated successfully.
Sep 30 14:52:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 14:52:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707749693' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:52:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26384 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16833 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:52:29 compute-0 podman[287813]: 2025-09-30 14:52:29.765413762 +0000 UTC m=+0.046133543 container create 853f208ea2e56d77f027ed943704566331d48b8007b2269a8ad4bfd7597f26ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_pasteur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:52:29 compute-0 systemd[1]: Started libpod-conmon-853f208ea2e56d77f027ed943704566331d48b8007b2269a8ad4bfd7597f26ea.scope.
Sep 30 14:52:29 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:52:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17022640c28f4386f18b5a423563909a0affcb38f5095c2a2d96ec998344aa1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17022640c28f4386f18b5a423563909a0affcb38f5095c2a2d96ec998344aa1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17022640c28f4386f18b5a423563909a0affcb38f5095c2a2d96ec998344aa1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17022640c28f4386f18b5a423563909a0affcb38f5095c2a2d96ec998344aa1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:52:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17022640c28f4386f18b5a423563909a0affcb38f5095c2a2d96ec998344aa1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:52:29 compute-0 podman[287813]: 2025-09-30 14:52:29.744269981 +0000 UTC m=+0.024989782 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:52:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:29.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:29 compute-0 podman[287813]: 2025-09-30 14:52:29.847282475 +0000 UTC m=+0.128002276 container init 853f208ea2e56d77f027ed943704566331d48b8007b2269a8ad4bfd7597f26ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_pasteur, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Sep 30 14:52:29 compute-0 podman[287813]: 2025-09-30 14:52:29.857601084 +0000 UTC m=+0.138320875 container start 853f208ea2e56d77f027ed943704566331d48b8007b2269a8ad4bfd7597f26ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:52:29 compute-0 podman[287813]: 2025-09-30 14:52:29.861068504 +0000 UTC m=+0.141788305 container attach 853f208ea2e56d77f027ed943704566331d48b8007b2269a8ad4bfd7597f26ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Sep 30 14:52:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26170 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Sep 30 14:52:30 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41311244' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26402 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 adoring_pasteur[287852]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:52:30 compute-0 adoring_pasteur[287852]: --> All data devices are unavailable
Sep 30 14:52:30 compute-0 ceph-mon[74194]: pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.26119 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.26336 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.16788 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.26137 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3815609234' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3145269085' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.26363 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.16809 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2146008747' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/707749693' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3815065320' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3024478114' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/41311244' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1346603460' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 14:52:30 compute-0 systemd[1]: libpod-853f208ea2e56d77f027ed943704566331d48b8007b2269a8ad4bfd7597f26ea.scope: Deactivated successfully.
Sep 30 14:52:30 compute-0 podman[287813]: 2025-09-30 14:52:30.191029682 +0000 UTC m=+0.471749463 container died 853f208ea2e56d77f027ed943704566331d48b8007b2269a8ad4bfd7597f26ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_pasteur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 14:52:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16848 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-17022640c28f4386f18b5a423563909a0affcb38f5095c2a2d96ec998344aa1e-merged.mount: Deactivated successfully.
Sep 30 14:52:30 compute-0 podman[287813]: 2025-09-30 14:52:30.230131081 +0000 UTC m=+0.510850862 container remove 853f208ea2e56d77f027ed943704566331d48b8007b2269a8ad4bfd7597f26ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:52:30 compute-0 systemd[1]: libpod-conmon-853f208ea2e56d77f027ed943704566331d48b8007b2269a8ad4bfd7597f26ea.scope: Deactivated successfully.
Sep 30 14:52:30 compute-0 sudo[287601]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:30 compute-0 sudo[287944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:52:30 compute-0 sudo[287944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:30 compute-0 sudo[287944]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26182 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 sudo[287981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:52:30 compute-0 sudo[287981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:30 compute-0 crontab[288048]: (root) LIST (root)
Sep 30 14:52:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26423 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Sep 30 14:52:30 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425949006' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 14:52:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16875 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:30.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:30 compute-0 nova_compute[261524]: 2025-09-30 14:52:30.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26194 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:30 compute-0 podman[288126]: 2025-09-30 14:52:30.861796059 +0000 UTC m=+0.065982850 container create 338b2ba7089dde9f1ec3306618f0767bdf32abf4321aa0873844ac3ddae89362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:52:30 compute-0 systemd[1]: Started libpod-conmon-338b2ba7089dde9f1ec3306618f0767bdf32abf4321aa0873844ac3ddae89362.scope.
Sep 30 14:52:30 compute-0 podman[288126]: 2025-09-30 14:52:30.822458584 +0000 UTC m=+0.026645405 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:52:30 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:52:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26429 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:30 compute-0 podman[288126]: 2025-09-30 14:52:30.962213606 +0000 UTC m=+0.166400427 container init 338b2ba7089dde9f1ec3306618f0767bdf32abf4321aa0873844ac3ddae89362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:52:30 compute-0 podman[288126]: 2025-09-30 14:52:30.972568166 +0000 UTC m=+0.176754957 container start 338b2ba7089dde9f1ec3306618f0767bdf32abf4321aa0873844ac3ddae89362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:52:30 compute-0 podman[288126]: 2025-09-30 14:52:30.976470767 +0000 UTC m=+0.180657558 container attach 338b2ba7089dde9f1ec3306618f0767bdf32abf4321aa0873844ac3ddae89362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:52:30 compute-0 pedantic_carson[288168]: 167 167
Sep 30 14:52:30 compute-0 systemd[1]: libpod-338b2ba7089dde9f1ec3306618f0767bdf32abf4321aa0873844ac3ddae89362.scope: Deactivated successfully.
Sep 30 14:52:30 compute-0 podman[288126]: 2025-09-30 14:52:30.981710304 +0000 UTC m=+0.185897095 container died 338b2ba7089dde9f1ec3306618f0767bdf32abf4321aa0873844ac3ddae89362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:52:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-18330f09d31f99bfc23c0353f69fb890e56be44636a088ceb47df006449647f8-merged.mount: Deactivated successfully.
Sep 30 14:52:31 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16890 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:31 compute-0 podman[288126]: 2025-09-30 14:52:31.027009454 +0000 UTC m=+0.231196245 container remove 338b2ba7089dde9f1ec3306618f0767bdf32abf4321aa0873844ac3ddae89362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_carson, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:52:31 compute-0 systemd[1]: libpod-conmon-338b2ba7089dde9f1ec3306618f0767bdf32abf4321aa0873844ac3ddae89362.scope: Deactivated successfully.
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.26155 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.26384 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.16833 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.26170 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.26402 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.16848 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.26182 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/40701258' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2425949006' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1733394663' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2542343164' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 14:52:31 compute-0 podman[288212]: 2025-09-30 14:52:31.220045824 +0000 UTC m=+0.052096928 container create 0ea42a24cecd6eed366b36701a769bfef6474224052e9c9a49e2ab88be2090ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_nash, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 14:52:31 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26209 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:31 compute-0 systemd[1]: Started libpod-conmon-0ea42a24cecd6eed366b36701a769bfef6474224052e9c9a49e2ab88be2090ab.scope.
Sep 30 14:52:31 compute-0 podman[288212]: 2025-09-30 14:52:31.196337256 +0000 UTC m=+0.028388400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:52:31 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0ed19ea3a8e1429e36e6c598f4c665d12e22d2a277b8e085441dcb948689d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0ed19ea3a8e1429e36e6c598f4c665d12e22d2a277b8e085441dcb948689d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0ed19ea3a8e1429e36e6c598f4c665d12e22d2a277b8e085441dcb948689d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea0ed19ea3a8e1429e36e6c598f4c665d12e22d2a277b8e085441dcb948689d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:31 compute-0 podman[288212]: 2025-09-30 14:52:31.330604755 +0000 UTC m=+0.162655899 container init 0ea42a24cecd6eed366b36701a769bfef6474224052e9c9a49e2ab88be2090ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_nash, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:52:31 compute-0 podman[288212]: 2025-09-30 14:52:31.338634054 +0000 UTC m=+0.170685198 container start 0ea42a24cecd6eed366b36701a769bfef6474224052e9c9a49e2ab88be2090ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_nash, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:52:31 compute-0 podman[288212]: 2025-09-30 14:52:31.342398742 +0000 UTC m=+0.174449886 container attach 0ea42a24cecd6eed366b36701a769bfef6474224052e9c9a49e2ab88be2090ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_nash, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:52:31 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26444 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:31 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16896 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:31 compute-0 practical_nash[288267]: {
Sep 30 14:52:31 compute-0 practical_nash[288267]:     "0": [
Sep 30 14:52:31 compute-0 practical_nash[288267]:         {
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "devices": [
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "/dev/loop3"
Sep 30 14:52:31 compute-0 practical_nash[288267]:             ],
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "lv_name": "ceph_lv0",
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "lv_size": "21470642176",
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "name": "ceph_lv0",
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "tags": {
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.cluster_name": "ceph",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.crush_device_class": "",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.encrypted": "0",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.osd_id": "0",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.type": "block",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.vdo": "0",
Sep 30 14:52:31 compute-0 practical_nash[288267]:                 "ceph.with_tpm": "0"
Sep 30 14:52:31 compute-0 practical_nash[288267]:             },
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "type": "block",
Sep 30 14:52:31 compute-0 practical_nash[288267]:             "vg_name": "ceph_vg0"
Sep 30 14:52:31 compute-0 practical_nash[288267]:         }
Sep 30 14:52:31 compute-0 practical_nash[288267]:     ]
Sep 30 14:52:31 compute-0 practical_nash[288267]: }
Sep 30 14:52:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Sep 30 14:52:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635498237' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 14:52:31 compute-0 systemd[1]: libpod-0ea42a24cecd6eed366b36701a769bfef6474224052e9c9a49e2ab88be2090ab.scope: Deactivated successfully.
Sep 30 14:52:31 compute-0 podman[288212]: 2025-09-30 14:52:31.630356885 +0000 UTC m=+0.462408009 container died 0ea42a24cecd6eed366b36701a769bfef6474224052e9c9a49e2ab88be2090ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:52:31 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26230 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-dea0ed19ea3a8e1429e36e6c598f4c665d12e22d2a277b8e085441dcb948689d-merged.mount: Deactivated successfully.
Sep 30 14:52:31 compute-0 podman[288212]: 2025-09-30 14:52:31.681781995 +0000 UTC m=+0.513833099 container remove 0ea42a24cecd6eed366b36701a769bfef6474224052e9c9a49e2ab88be2090ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:52:31 compute-0 systemd[1]: libpod-conmon-0ea42a24cecd6eed366b36701a769bfef6474224052e9c9a49e2ab88be2090ab.scope: Deactivated successfully.
Sep 30 14:52:31 compute-0 sudo[287981]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:31 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26468 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:31 compute-0 sudo[288319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:52:31 compute-0 sudo[288319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:31 compute-0 sudo[288319]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:31 compute-0 sudo[288363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:52:31 compute-0 sudo[288363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:31.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:31 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16923 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:31 compute-0 nova_compute[261524]: 2025-09-30 14:52:31.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Sep 30 14:52:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2281687846' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26245 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26483 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.26423 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.16875 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.26194 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.26429 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.16890 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.26209 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.26444 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/791889916' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4223371938' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2635498237' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2268261266' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2281687846' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 14:52:32 compute-0 podman[288471]: 2025-09-30 14:52:32.293408652 +0000 UTC m=+0.039209583 container create ffc385c07b717a20cdff8af1c4c2d057486f7c632965e990243f0749daeeb937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_beaver, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:52:32 compute-0 systemd[1]: Started libpod-conmon-ffc385c07b717a20cdff8af1c4c2d057486f7c632965e990243f0749daeeb937.scope.
Sep 30 14:52:32 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.16941 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:52:32 compute-0 podman[288471]: 2025-09-30 14:52:32.275418453 +0000 UTC m=+0.021219414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:52:32 compute-0 podman[288471]: 2025-09-30 14:52:32.372484993 +0000 UTC m=+0.118285944 container init ffc385c07b717a20cdff8af1c4c2d057486f7c632965e990243f0749daeeb937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:52:32 compute-0 podman[288471]: 2025-09-30 14:52:32.380397729 +0000 UTC m=+0.126198660 container start ffc385c07b717a20cdff8af1c4c2d057486f7c632965e990243f0749daeeb937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:52:32 compute-0 podman[288471]: 2025-09-30 14:52:32.384433754 +0000 UTC m=+0.130234705 container attach ffc385c07b717a20cdff8af1c4c2d057486f7c632965e990243f0749daeeb937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_beaver, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Sep 30 14:52:32 compute-0 funny_beaver[288488]: 167 167
Sep 30 14:52:32 compute-0 systemd[1]: libpod-ffc385c07b717a20cdff8af1c4c2d057486f7c632965e990243f0749daeeb937.scope: Deactivated successfully.
Sep 30 14:52:32 compute-0 podman[288471]: 2025-09-30 14:52:32.387313579 +0000 UTC m=+0.133114510 container died ffc385c07b717a20cdff8af1c4c2d057486f7c632965e990243f0749daeeb937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_beaver, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:52:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Sep 30 14:52:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315270532' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 14:52:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a1ad6eb27e3a195a3f4bb0b4ee86d52b9a30d1f46d515d7639d649f73400cb2-merged.mount: Deactivated successfully.
Sep 30 14:52:32 compute-0 podman[288471]: 2025-09-30 14:52:32.436742117 +0000 UTC m=+0.182543048 container remove ffc385c07b717a20cdff8af1c4c2d057486f7c632965e990243f0749daeeb937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:52:32 compute-0 systemd[1]: libpod-conmon-ffc385c07b717a20cdff8af1c4c2d057486f7c632965e990243f0749daeeb937.scope: Deactivated successfully.
Sep 30 14:52:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:32 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26257 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:32 compute-0 podman[288539]: 2025-09-30 14:52:32.613263076 +0000 UTC m=+0.047573100 container create 2e32a6579767b1042bd8c6b20ff09540584e54039794edc82e6937ea04404d4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_banach, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:52:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:32 compute-0 systemd[1]: Started libpod-conmon-2e32a6579767b1042bd8c6b20ff09540584e54039794edc82e6937ea04404d4d.scope.
Sep 30 14:52:32 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5e88fb1d0290a4325de93cd440bcc364f1d753c1f78994a6ffa253ae1cc484/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5e88fb1d0290a4325de93cd440bcc364f1d753c1f78994a6ffa253ae1cc484/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:32 compute-0 podman[288539]: 2025-09-30 14:52:32.595780781 +0000 UTC m=+0.030090835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5e88fb1d0290a4325de93cd440bcc364f1d753c1f78994a6ffa253ae1cc484/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5e88fb1d0290a4325de93cd440bcc364f1d753c1f78994a6ffa253ae1cc484/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:52:32 compute-0 podman[288539]: 2025-09-30 14:52:32.717688017 +0000 UTC m=+0.151998051 container init 2e32a6579767b1042bd8c6b20ff09540584e54039794edc82e6937ea04404d4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Sep 30 14:52:32 compute-0 podman[288539]: 2025-09-30 14:52:32.724542646 +0000 UTC m=+0.158852670 container start 2e32a6579767b1042bd8c6b20ff09540584e54039794edc82e6937ea04404d4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:52:32 compute-0 podman[288539]: 2025-09-30 14:52:32.729628288 +0000 UTC m=+0.163938312 container attach 2e32a6579767b1042bd8c6b20ff09540584e54039794edc82e6937ea04404d4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:52:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Sep 30 14:52:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185949828' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 14:52:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Sep 30 14:52:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3151056148' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26269 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.16896 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.26230 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.26468 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.16923 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.26245 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.26483 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.16941 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1768108423' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2315270532' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2027556828' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3793054057' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3901774049' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1185949828' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3151056148' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3595110581' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/180415424' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/422943134' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 14:52:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3509836763' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 14:52:33 compute-0 lvm[288704]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:52:33 compute-0 lvm[288704]: VG ceph_vg0 finished
Sep 30 14:52:33 compute-0 heuristic_banach[288574]: {}
Sep 30 14:52:33 compute-0 systemd[1]: libpod-2e32a6579767b1042bd8c6b20ff09540584e54039794edc82e6937ea04404d4d.scope: Deactivated successfully.
Sep 30 14:52:33 compute-0 systemd[1]: libpod-2e32a6579767b1042bd8c6b20ff09540584e54039794edc82e6937ea04404d4d.scope: Consumed 1.137s CPU time.
Sep 30 14:52:33 compute-0 podman[288539]: 2025-09-30 14:52:33.503521572 +0000 UTC m=+0.937831606 container died 2e32a6579767b1042bd8c6b20ff09540584e54039794edc82e6937ea04404d4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_banach, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Sep 30 14:52:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef5e88fb1d0290a4325de93cd440bcc364f1d753c1f78994a6ffa253ae1cc484-merged.mount: Deactivated successfully.
Sep 30 14:52:33 compute-0 podman[288539]: 2025-09-30 14:52:33.557591121 +0000 UTC m=+0.991901145 container remove 2e32a6579767b1042bd8c6b20ff09540584e54039794edc82e6937ea04404d4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_banach, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:52:33 compute-0 systemd[1]: libpod-conmon-2e32a6579767b1042bd8c6b20ff09540584e54039794edc82e6937ea04404d4d.scope: Deactivated successfully.
Sep 30 14:52:33 compute-0 sudo[288363]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:52:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:52:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:52:33 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:52:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Sep 30 14:52:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170450574' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 14:52:33 compute-0 sudo[288766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:52:33 compute-0 sudo[288766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:33 compute-0 sudo[288766]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:33.710Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:33.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:34 compute-0 systemd[1]: Starting dnf makecache...
Sep 30 14:52:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Sep 30 14:52:34 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2453135004' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 14:52:34 compute-0 dnf[288868]: Metadata cache refreshed recently.
Sep 30 14:52:34 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Sep 30 14:52:34 compute-0 systemd[1]: Finished dnf makecache.
Sep 30 14:52:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Sep 30 14:52:34 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2481308977' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.26257 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.26269 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1338283372' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2523231260' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/200948868' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/443006564' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4070581682' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4170450574' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3158046409' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4133759465' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/493835738' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1371273631' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4175853660' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2453135004' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 14:52:34 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/912391796' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:52:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:34.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:34] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:52:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:34] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:52:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Sep 30 14:52:34 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/960044828' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26609 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Sep 30 14:52:34 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3997200306' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 14:52:35 compute-0 systemd[1]: Starting Hostname Service...
Sep 30 14:52:35 compute-0 systemd[1]: Started Hostname Service.
Sep 30 14:52:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Sep 30 14:52:35 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1406252710' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26621 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17061 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2481308977' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3223959109' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3608355324' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1362706758' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/912391796' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/960044828' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1577075684' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.26609 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3997200306' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/592840058' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/12731968' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1646120924' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1406252710' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2032546573' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/326212242' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Sep 30 14:52:35 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1473968219' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26639 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:35 compute-0 nova_compute[261524]: 2025-09-30 14:52:35.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:35.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:35 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17085 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17091 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26663 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17109 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.26621 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.17061 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1473968219' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/149027308' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.26639 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/642778765' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.17085 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.17091 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1147861063' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.26663 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3467157271' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3532042555' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26681 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26383 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:36.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:36 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Sep 30 14:52:36 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200480901' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26395 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17127 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:36 compute-0 nova_compute[261524]: 2025-09-30 14:52:36.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:37 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26702 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26404 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:37.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Sep 30 14:52:37 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2177513831' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17151 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26410 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26717 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.17109 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1284246696' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.26681 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.26383 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/197932642' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/200480901' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.26395 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.17127 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.26702 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.26404 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2177513831' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.17151 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.26410 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4245951064' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26434 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17169 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Sep 30 14:52:37 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466803261' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:37.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:37 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26732 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:37 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:38 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Sep 30 14:52:38 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2110355520' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26446 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:52:38.269 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:52:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:52:38.270 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:52:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:52:38.270 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:52:38 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17181 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26750 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.26717 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.26434 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.17169 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/385791881' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2466803261' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.26732 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/13958325' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2110355520' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.26446 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.17181 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.26750 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2023775354' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:38 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:38 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26470 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:38.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:38 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17217 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:38.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:39 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26795 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Sep 30 14:52:39 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1992381109' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26801 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26500 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17250 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='client.26470 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='client.17217 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2523082577' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4245527019' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='client.26795 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1992381109' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='client.26801 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/392551841' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:39 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:39.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:39 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Sep 30 14:52:39 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444387849' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 14:52:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Sep 30 14:52:40 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749097881' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 14:52:40 compute-0 podman[289615]: 2025-09-30 14:52:40.618941132 +0000 UTC m=+0.079283227 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 14:52:40 compute-0 podman[289614]: 2025-09-30 14:52:40.63230225 +0000 UTC m=+0.084322038 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Sep 30 14:52:40 compute-0 podman[289611]: 2025-09-30 14:52:40.637046323 +0000 UTC m=+0.104295299 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Sep 30 14:52:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:40.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:40 compute-0 podman[289612]: 2025-09-30 14:52:40.678038812 +0000 UTC m=+0.144750783 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:52:40 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26539 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='client.26500 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='client.17250 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3807206506' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1444387849' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2662572891' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3210101078' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1749097881' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 14:52:40 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/31123470' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 14:52:40 compute-0 nova_compute[261524]: 2025-09-30 14:52:40.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Sep 30 14:52:41 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/666801587' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 14:52:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Sep 30 14:52:41 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623109835' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 14:52:41 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26855 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:41 compute-0 ceph-mon[74194]: from='client.26539 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:41 compute-0 ceph-mon[74194]: pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:41 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/666801587' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 14:52:41 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2502490297' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 14:52:41 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1705225302' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 14:52:41 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1623109835' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 14:52:41 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3479404343' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 14:52:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:41.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:41 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17295 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:41 compute-0 nova_compute[261524]: 2025-09-30 14:52:41.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Sep 30 14:52:42 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230287432' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:42.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:42 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26888 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Sep 30 14:52:42 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602311117' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mon[74194]: from='client.26855 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mon[74194]: from='client.17295 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2502254178' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/19458601' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1230287432' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/335314317' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/77170888' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1602311117' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 14:52:42 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26575 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:43 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17331 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:43 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26909 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:43 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Sep 30 14:52:43 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/98787851' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Sep 30 14:52:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:43.711Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:43.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:43 compute-0 ceph-mon[74194]: from='client.26888 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:43 compute-0 ceph-mon[74194]: pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:43 compute-0 ceph-mon[74194]: from='client.26575 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:43 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1026751740' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Sep 30 14:52:43 compute-0 ceph-mon[74194]: from='client.17331 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:43 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/488113577' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 14:52:43 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/98787851' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Sep 30 14:52:43 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17346 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:43 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17352 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:44 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26599 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17370 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:44.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:52:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Sep 30 14:52:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403434210' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Sep 30 14:52:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:44] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:52:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:44] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:52:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='client.26909 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3685427179' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='client.17346 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='client.17352 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3561573759' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='client.26599 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='client.17370 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3608913749' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2403434210' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Sep 30 14:52:44 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2991816507' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26948 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:45 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Sep 30 14:52:45 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827885466' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26611 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26963 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
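The pg_autoscaler lines above amount to a small calculation per pool: the logged "pg target" is the pool's share of raw capacity times its bias times a per-root PG budget, and that raw value is then quantized to an actual PG count. In these entries the budget works out to exactly 300 for every pool (e.g. 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337 for '.mgr'), which would be consistent with mon_target_pg_per_osd = 100 on a three-OSD root, though that is inferred from the logged ratios rather than read from the cluster configuration. A minimal Python sketch that reproduces the figures above; ROOT_PG_BUDGET and the simplified quantize() rule are assumptions for illustration, not the autoscaler's actual implementation:

    # Reproduce the pg_autoscaler arithmetic visible in the log lines above.
    # ROOT_PG_BUDGET is inferred from the logged ratios (pg_target / (capacity_ratio * bias) == 300);
    # quantize() is a simplification of the autoscaler's real rounding rules.
    ROOT_PG_BUDGET = 300

    def quantize(target, current):
        # Round the raw target up to a power of two, but never suggest shrinking
        # a pool whose current pg_num already exceeds the target (simplified).
        if target <= current:
            return current
        power = 1
        while power < target:
            power *= 2
        return power

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 1),
        "images":             (0.000665858301588852, 1.0, 32),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 16),
    }
    for name, (capacity_ratio, bias, current) in pools.items():
        raw_target = capacity_ratio * bias * ROOT_PG_BUDGET
        print(f"{name}: pg target {raw_target} quantized to {quantize(raw_target, current)} (current {current})")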
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17403 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26617 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:45 compute-0 nova_compute[261524]: 2025-09-30 14:52:45.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:45.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
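The beast access line above records an unauthenticated HEAD / probe, and identical probes arrive from 192.168.122.100 and 192.168.122.102 roughly once per second throughout this window, the pattern of a load-balancer or monitoring health check rather than real object traffic. A minimal sketch of the same kind of request, assuming the RGW beast frontend answers on compute-0 at port 8080 (neither that hostname form nor the port appears in these lines, so both are assumptions, and the logged probe uses HTTP/1.0 while http.client defaults to HTTP/1.1):

    # Issue the same anonymous health probe that shows up as "HEAD / HTTP/1.0" 200 in the beast log.
    # RGW_HOST and RGW_PORT are assumptions; adjust to the frontend actually configured here.
    import http.client

    RGW_HOST = "compute-0.ctlplane.example.com"
    RGW_PORT = 8080

    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=5)
    conn.request("HEAD", "/")        # no credentials, so RGW logs the caller as "anonymous"
    resp = conn.getresponse()
    print(resp.status)               # the log shows http_status=200 and near-zero latency for these probes
    conn.close()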
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17412 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:45 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:45 compute-0 ceph-mon[74194]: pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:45 compute-0 ceph-mon[74194]: from='client.26948 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:45 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/827885466' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Sep 30 14:52:45 compute-0 ceph-mon[74194]: from='client.26611 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:45 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1381099233' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Sep 30 14:52:46 compute-0 ovs-appctl[291076]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Sep 30 14:52:46 compute-0 ovs-appctl[291082]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Sep 30 14:52:46 compute-0 ovs-appctl[291089]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Sep 30 14:52:46 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Sep 30 14:52:46 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722945262' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
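Each audit "dispatch" line above is the monitor receiving a structured mon command: a JSON object with a "prefix" (the CLI command name) plus optional arguments such as "detail" or "format", exactly as the ceph CLI or any librados client submits it. A minimal Python sketch of dispatching the same command through the rados bindings; the conffile path and client name are assumptions (a cephadm deployment may keep its conf and keyring under /var/lib/ceph/<fsid>/ instead):

    # Send the same mon command that the audit log records as
    # cmd=[{"prefix": "osd pool ls", "detail": "detail"}].
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")  # paths/names assumed
    cluster.connect()
    cmd = json.dumps({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")   # (return code, output buffer, status string)
    print(ret, outbuf.decode())
    cluster.shutdown()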
Sep 30 14:52:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:46.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:46 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26644 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:46 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26650 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:46 compute-0 ceph-mon[74194]: from='client.26963 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:46 compute-0 ceph-mon[74194]: from='client.17403 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:46 compute-0 ceph-mon[74194]: from='client.26617 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:46 compute-0 ceph-mon[74194]: from='client.17412 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:46 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2497266440' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Sep 30 14:52:46 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2063487070' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Sep 30 14:52:46 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1722945262' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Sep 30 14:52:46 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2474617457' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Sep 30 14:52:46 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2888408356' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Sep 30 14:52:46 compute-0 nova_compute[261524]: 2025-09-30 14:52:46.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17430 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26999 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:47 compute-0 sudo[291461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:52:47 compute-0 sudo[291461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:52:47 compute-0 sudo[291461]: pam_unix(sudo:session): session closed for user root
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26659 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:47.206Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:52:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:47.207Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:52:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:47.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
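The alertmanager entries above show both ceph-dashboard webhook receivers failing: the dial to 192.168.122.101:8443 and 192.168.122.102:8443 times out, and once the retry budget is spent the dispatcher drops the notification with "context deadline exceeded". A quick connectivity check from compute-0 against one of the logged receiver URLs; a minimal sketch using only the standard library, where the URL is copied from the log but the 5-second timeout and the empty-alerts payload shape are assumptions:

    # Probe the webhook endpoint that alertmanager cannot reach.
    # URL taken from the log; timeout and payload shape are assumptions for the test.
    import json
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    body = json.dumps({"alerts": []}).encode()
    req = urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except OSError as exc:      # connection timeouts, refusals, and HTTP errors all surface here
        print("unreachable:", exc)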
Sep 30 14:52:47 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17439 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:47.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Sep 30 14:52:47 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/746289546' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mon[74194]: from='client.26644 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mon[74194]: from='client.26650 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mon[74194]: pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:48 compute-0 ceph-mon[74194]: from='client.17430 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mon[74194]: from='client.26999 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mon[74194]: from='client.26659 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1094285710' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3187341005' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/746289546' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26680 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Sep 30 14:52:48 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3391685783' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Sep 30 14:52:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:52:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:52:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Sep 30 14:52:48 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470510488' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:48 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17466 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:48 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27041 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:48.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:52:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:48.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:49 compute-0 ceph-mon[74194]: from='client.17439 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:49 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3243690731' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Sep 30 14:52:49 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3746993586' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Sep 30 14:52:49 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2444344828' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:49 compute-0 ceph-mon[74194]: from='client.26680 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:49 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3391685783' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Sep 30 14:52:49 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3470510488' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:49 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17472 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:49 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Sep 30 14:52:49 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045369078' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:49.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Sep 30 14:52:50 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2916917437' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mon[74194]: pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:50 compute-0 ceph-mon[74194]: from='client.17466 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mon[74194]: from='client.27041 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mon[74194]: from='client.17472 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/315416704' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3029621622' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1045369078' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1843511958' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3529892534' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2916917437' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Sep 30 14:52:50 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1112761798' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:50 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26713 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:50.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:50 compute-0 nova_compute[261524]: 2025-09-30 14:52:50.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:50 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Sep 30 14:52:50 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/990647403' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:51 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3744522294' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:51 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1773263589' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:51 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1112761798' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:51 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1127134164' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:51 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/990647403' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:51 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3002150483' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:51 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27113 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:51 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17514 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:51 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Sep 30 14:52:51 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288452577' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:51.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:51 compute-0 nova_compute[261524]: 2025-09-30 14:52:51.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:52 compute-0 ceph-mon[74194]: from='client.26713 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 ceph-mon[74194]: pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:52 compute-0 ceph-mon[74194]: from='client.27113 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 ceph-mon[74194]: from='client.17514 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1353712787' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/539248331' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1288452577' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1824353088' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/215861810' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Sep 30 14:52:52 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2115595115' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:52 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27143 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17550 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:52.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:52 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26752 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Sep 30 14:52:53 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035154598' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:53 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2115595115' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:53 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3861105183' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:53 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2678516651' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:53 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2035154598' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:53 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27164 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:53 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17568 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:53.711Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:52:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:53.712Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:52:53 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27173 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:53 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17580 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:53.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:54 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26776 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 ceph-mon[74194]: from='client.27143 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 ceph-mon[74194]: from='client.17550 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 ceph-mon[74194]: pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:54 compute-0 ceph-mon[74194]: from='client.26752 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 ceph-mon[74194]: from='client.27164 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1580177487' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1561869143' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3532162595' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Sep 30 14:52:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1110319995' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Sep 30 14:52:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915529136' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:54.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:52:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:54] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:52:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:52:54] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:52:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:54 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27203 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:54 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26791 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17604 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mon[74194]: from='client.17568 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mon[74194]: from='client.27173 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mon[74194]: from='client.17580 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mon[74194]: from='client.26776 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1110319995' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2022702227' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3624781350' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3915529136' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26797 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27209 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27215 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:55 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:55 compute-0 nova_compute[261524]: 2025-09-30 14:52:55.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:55 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Sep 30 14:52:55 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2511139130' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:55.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:56 compute-0 ceph-mon[74194]: pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:56 compute-0 ceph-mon[74194]: from='client.27203 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mon[74194]: from='client.26791 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mon[74194]: from='client.17604 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mon[74194]: from='client.26797 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mon[74194]: from='client.27209 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1830190949' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/664112369' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2511139130' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2222906305' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1453617943' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Sep 30 14:52:56 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2653220034' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27239 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26818 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17643 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:56.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27251 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26830 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:52:56 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:52:57 compute-0 nova_compute[261524]: 2025-09-30 14:52:57.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:52:57 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17649 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:57.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:57 compute-0 ceph-mon[74194]: from='client.27215 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:57 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2653220034' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:52:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Sep 30 14:52:57 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914751026' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:57.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Sep 30 14:52:57 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967711490' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26848 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 virtqemud[261000]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.27239 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.26818 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.17643 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.27251 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.26830 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.17649 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3046569264' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/926955893' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3914751026' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2878469568' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1765912665' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1967711490' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.26854 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:58 compute-0 systemd[1]: Starting Time & Date Service...
Sep 30 14:52:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:52:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:52:58.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:52:58 compute-0 systemd[1]: Started Time & Date Service.
Sep 30 14:52:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:52:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:52:58.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:52:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:52:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:52:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:52:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:52:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:52:59 compute-0 ceph-mon[74194]: from='client.26848 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:52:59 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1115282808' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Sep 30 14:52:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-crash-compute-0[79646]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:52:59
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'backups', '.nfs', 'vms', 'volumes', 'images', '.rgw.root']
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:52:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:52:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:52:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:52:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:52:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:52:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:52:59.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:53:00 compute-0 ceph-mon[74194]: from='client.26854 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 14:53:00 compute-0 ceph-mon[74194]: pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:00 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3150729378' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Sep 30 14:53:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:53:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:00.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:00 compute-0 nova_compute[261524]: 2025-09-30 14:53:00.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:53:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:53:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:53:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:53:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:53:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:53:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:53:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:01.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:02 compute-0 nova_compute[261524]: 2025-09-30 14:53:02.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:02 compute-0 ceph-mon[74194]: pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:02.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:03.714Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:03.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:04 compute-0 ceph-mon[74194]: pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:04.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:04] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:53:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:04] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:53:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:05 compute-0 ceph-mon[74194]: pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:05 compute-0 nova_compute[261524]: 2025-09-30 14:53:05.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:05.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:06.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:07 compute-0 nova_compute[261524]: 2025-09-30 14:53:07.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:07.209Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:07 compute-0 sudo[293507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:53:07 compute-0 sudo[293507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:07 compute-0 sudo[293507]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:07 compute-0 ceph-mon[74194]: pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:07.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:08.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:08.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:09.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:10 compute-0 ceph-mon[74194]: pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:10.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:10 compute-0 nova_compute[261524]: 2025-09-30 14:53:10.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:11 compute-0 podman[293539]: 2025-09-30 14:53:11.15269887 +0000 UTC m=+0.066300599 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Sep 30 14:53:11 compute-0 podman[293538]: 2025-09-30 14:53:11.15691643 +0000 UTC m=+0.076418452 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:53:11 compute-0 podman[293536]: 2025-09-30 14:53:11.221835671 +0000 UTC m=+0.141560509 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Sep 30 14:53:11 compute-0 podman[293537]: 2025-09-30 14:53:11.26321888 +0000 UTC m=+0.182702812 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:53:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1321683530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:53:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1321683530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:53:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:11.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:12 compute-0 nova_compute[261524]: 2025-09-30 14:53:12.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:12 compute-0 ceph-mon[74194]: pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:12.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:13 compute-0 ceph-mon[74194]: pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:13.714Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:13.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:53:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:53:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:14.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:14] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:53:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:14] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:53:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:53:15 compute-0 nova_compute[261524]: 2025-09-30 14:53:15.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:15.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:16 compute-0 ceph-mon[74194]: pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:16.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:17 compute-0 nova_compute[261524]: 2025-09-30 14:53:17.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:17.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:17.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:18 compute-0 ceph-mon[74194]: pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:18.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:18.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:18 compute-0 nova_compute[261524]: 2025-09-30 14:53:18.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:53:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:19.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:20 compute-0 ceph-mon[74194]: pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:20.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:20 compute-0 nova_compute[261524]: 2025-09-30 14:53:20.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:20 compute-0 nova_compute[261524]: 2025-09-30 14:53:20.947 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:53:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/969295922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:53:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:21.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:21 compute-0 nova_compute[261524]: 2025-09-30 14:53:21.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:53:21 compute-0 nova_compute[261524]: 2025-09-30 14:53:21.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:53:21 compute-0 nova_compute[261524]: 2025-09-30 14:53:21.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:53:21 compute-0 nova_compute[261524]: 2025-09-30 14:53:21.971 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:53:21 compute-0 nova_compute[261524]: 2025-09-30 14:53:21.971 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.000 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.001 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.001 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.001 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.001 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:22 compute-0 ceph-mon[74194]: pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3356329139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:53:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2795229564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:53:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3638338944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:53:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:53:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2074761425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.494 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:53:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.642 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.643 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4413MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.643 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.643 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.712 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.712 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:53:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:22.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:22 compute-0 nova_compute[261524]: 2025-09-30 14:53:22.730 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:53:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:53:23 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228673537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:53:23 compute-0 nova_compute[261524]: 2025-09-30 14:53:23.163 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:53:23 compute-0 nova_compute[261524]: 2025-09-30 14:53:23.169 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:53:23 compute-0 nova_compute[261524]: 2025-09-30 14:53:23.187 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:53:23 compute-0 nova_compute[261524]: 2025-09-30 14:53:23.189 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:53:23 compute-0 nova_compute[261524]: 2025-09-30 14:53:23.189 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:53:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2074761425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:53:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4228673537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:53:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:23.715Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:23.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:24 compute-0 nova_compute[261524]: 2025-09-30 14:53:24.170 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:53:24 compute-0 nova_compute[261524]: 2025-09-30 14:53:24.171 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:53:24 compute-0 nova_compute[261524]: 2025-09-30 14:53:24.171 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:53:24 compute-0 nova_compute[261524]: 2025-09-30 14:53:24.171 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:53:24 compute-0 nova_compute[261524]: 2025-09-30 14:53:24.172 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:53:24 compute-0 nova_compute[261524]: 2025-09-30 14:53:24.172 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:53:24 compute-0 ceph-mon[74194]: pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:53:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:24.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:53:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:24] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:53:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:24] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:53:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:25 compute-0 ceph-mon[74194]: pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:25 compute-0 nova_compute[261524]: 2025-09-30 14:53:25.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:25.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:26.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:27 compute-0 nova_compute[261524]: 2025-09-30 14:53:27.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:27.211Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:53:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:27.212Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:53:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:27.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:27 compute-0 sudo[293677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:53:27 compute-0 sudo[293677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:27 compute-0 sudo[293677]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:27 compute-0 ceph-mon[74194]: pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:28.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:28 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Sep 30 14:53:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:28 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 30 14:53:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:28.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:53:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:53:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:53:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:53:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:53:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:53:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:53:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:53:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:29.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:30 compute-0 ceph-mon[74194]: pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:53:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:30.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:30 compute-0 nova_compute[261524]: 2025-09-30 14:53:30.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:53:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:31.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:53:32 compute-0 ceph-mon[74194]: pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:32 compute-0 nova_compute[261524]: 2025-09-30 14:53:32.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:32.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:33.716Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:53:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:33.717Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:33.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:33 compute-0 sudo[293714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:53:33 compute-0 sudo[293714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:33 compute-0 sudo[293714]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:34 compute-0 sudo[293739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Sep 30 14:53:34 compute-0 sudo[293739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:34 compute-0 ceph-mon[74194]: pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 14:53:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 14:53:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:34 compute-0 sudo[293739]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:53:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:53:34 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:34 compute-0 sudo[293786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:53:34 compute-0 sudo[293786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:34 compute-0 sudo[293786]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:34 compute-0 sudo[293811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:53:34 compute-0 sudo[293811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:34] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:53:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:34] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:53:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:34.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:35 compute-0 sudo[293811]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:53:35 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:53:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:53:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:53:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:53:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 591 B/s rd, 0 op/s
Sep 30 14:53:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:53:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:53:35 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:53:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:53:35 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:53:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:53:35 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:53:35 compute-0 sudo[293867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:53:35 compute-0 sudo[293867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:35 compute-0 sudo[293867]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:35 compute-0 sudo[293892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:53:35 compute-0 sudo[293892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:53:35 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:53:35 compute-0 podman[293959]: 2025-09-30 14:53:35.854604424 +0000 UTC m=+0.061204266 container create f3f9b0aefb25922a554dc6c75ee3bed4a197471994237423ba56e115dab2a033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_payne, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 14:53:35 compute-0 nova_compute[261524]: 2025-09-30 14:53:35.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:35.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:35 compute-0 podman[293959]: 2025-09-30 14:53:35.829017577 +0000 UTC m=+0.035617509 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:53:35 compute-0 systemd[1]: Started libpod-conmon-f3f9b0aefb25922a554dc6c75ee3bed4a197471994237423ba56e115dab2a033.scope.
Sep 30 14:53:35 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:53:35 compute-0 podman[293959]: 2025-09-30 14:53:35.986678585 +0000 UTC m=+0.193278437 container init f3f9b0aefb25922a554dc6c75ee3bed4a197471994237423ba56e115dab2a033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_payne, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:53:35 compute-0 podman[293959]: 2025-09-30 14:53:35.99415468 +0000 UTC m=+0.200754512 container start f3f9b0aefb25922a554dc6c75ee3bed4a197471994237423ba56e115dab2a033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:53:35 compute-0 podman[293959]: 2025-09-30 14:53:35.997587949 +0000 UTC m=+0.204187791 container attach f3f9b0aefb25922a554dc6c75ee3bed4a197471994237423ba56e115dab2a033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_payne, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:53:36 compute-0 angry_payne[293976]: 167 167
Sep 30 14:53:36 compute-0 systemd[1]: libpod-f3f9b0aefb25922a554dc6c75ee3bed4a197471994237423ba56e115dab2a033.scope: Deactivated successfully.
Sep 30 14:53:36 compute-0 podman[293959]: 2025-09-30 14:53:36.001859171 +0000 UTC m=+0.208459013 container died f3f9b0aefb25922a554dc6c75ee3bed4a197471994237423ba56e115dab2a033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 14:53:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-06e48300fb5c383c3a4ae831d2aa7fe71d1089ed98164093fba69a19616a42bb-merged.mount: Deactivated successfully.
Sep 30 14:53:36 compute-0 podman[293959]: 2025-09-30 14:53:36.049495232 +0000 UTC m=+0.256095064 container remove f3f9b0aefb25922a554dc6c75ee3bed4a197471994237423ba56e115dab2a033 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:53:36 compute-0 systemd[1]: libpod-conmon-f3f9b0aefb25922a554dc6c75ee3bed4a197471994237423ba56e115dab2a033.scope: Deactivated successfully.
Sep 30 14:53:36 compute-0 podman[294001]: 2025-09-30 14:53:36.218229428 +0000 UTC m=+0.046580354 container create 926522edfac41ae6592686b1f5be71e96cbf4a22006d8223a52b8b1529409983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:53:36 compute-0 systemd[1]: Started libpod-conmon-926522edfac41ae6592686b1f5be71e96cbf4a22006d8223a52b8b1529409983.scope.
Sep 30 14:53:36 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:53:36 compute-0 podman[294001]: 2025-09-30 14:53:36.197751095 +0000 UTC m=+0.026102231 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9355c2342c031e00d5c34c43564bebd82abcd8ac935c12dc40cdc06395f5651/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9355c2342c031e00d5c34c43564bebd82abcd8ac935c12dc40cdc06395f5651/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9355c2342c031e00d5c34c43564bebd82abcd8ac935c12dc40cdc06395f5651/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9355c2342c031e00d5c34c43564bebd82abcd8ac935c12dc40cdc06395f5651/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9355c2342c031e00d5c34c43564bebd82abcd8ac935c12dc40cdc06395f5651/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:36 compute-0 podman[294001]: 2025-09-30 14:53:36.317399722 +0000 UTC m=+0.145750668 container init 926522edfac41ae6592686b1f5be71e96cbf4a22006d8223a52b8b1529409983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cori, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:53:36 compute-0 podman[294001]: 2025-09-30 14:53:36.325980156 +0000 UTC m=+0.154331082 container start 926522edfac41ae6592686b1f5be71e96cbf4a22006d8223a52b8b1529409983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:53:36 compute-0 podman[294001]: 2025-09-30 14:53:36.329292152 +0000 UTC m=+0.157643108 container attach 926522edfac41ae6592686b1f5be71e96cbf4a22006d8223a52b8b1529409983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cori, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:53:36 compute-0 ceph-mon[74194]: pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:36 compute-0 ceph-mon[74194]: pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 591 B/s rd, 0 op/s
Sep 30 14:53:36 compute-0 admiring_cori[294017]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:53:36 compute-0 admiring_cori[294017]: --> All data devices are unavailable
Sep 30 14:53:36 compute-0 systemd[1]: libpod-926522edfac41ae6592686b1f5be71e96cbf4a22006d8223a52b8b1529409983.scope: Deactivated successfully.
Sep 30 14:53:36 compute-0 conmon[294017]: conmon 926522edfac41ae65926 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-926522edfac41ae6592686b1f5be71e96cbf4a22006d8223a52b8b1529409983.scope/container/memory.events
Sep 30 14:53:36 compute-0 podman[294032]: 2025-09-30 14:53:36.732765035 +0000 UTC m=+0.042513788 container died 926522edfac41ae6592686b1f5be71e96cbf4a22006d8223a52b8b1529409983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cori, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:53:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:36.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9355c2342c031e00d5c34c43564bebd82abcd8ac935c12dc40cdc06395f5651-merged.mount: Deactivated successfully.
Sep 30 14:53:36 compute-0 podman[294032]: 2025-09-30 14:53:36.776470034 +0000 UTC m=+0.086218787 container remove 926522edfac41ae6592686b1f5be71e96cbf4a22006d8223a52b8b1529409983 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cori, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:53:36 compute-0 systemd[1]: libpod-conmon-926522edfac41ae6592686b1f5be71e96cbf4a22006d8223a52b8b1529409983.scope: Deactivated successfully.
Sep 30 14:53:36 compute-0 sudo[293892]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:36 compute-0 sudo[294047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:53:36 compute-0 sudo[294047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:36 compute-0 sudo[294047]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:36 compute-0 sudo[294072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:53:36 compute-0 sudo[294072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:37 compute-0 nova_compute[261524]: 2025-09-30 14:53:37.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 591 B/s rd, 0 op/s
Sep 30 14:53:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:37.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:37 compute-0 podman[294136]: 2025-09-30 14:53:37.356913508 +0000 UTC m=+0.034905560 container create 6707b35951c47bff3f5ddb6712d543a31e79e8bb5940860791a6500c3525c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_beaver, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:53:37 compute-0 systemd[1]: Started libpod-conmon-6707b35951c47bff3f5ddb6712d543a31e79e8bb5940860791a6500c3525c343.scope.
Sep 30 14:53:37 compute-0 podman[294136]: 2025-09-30 14:53:37.341851736 +0000 UTC m=+0.019843808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:53:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:53:37 compute-0 podman[294136]: 2025-09-30 14:53:37.459253575 +0000 UTC m=+0.137245657 container init 6707b35951c47bff3f5ddb6712d543a31e79e8bb5940860791a6500c3525c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_beaver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Sep 30 14:53:37 compute-0 podman[294136]: 2025-09-30 14:53:37.466310199 +0000 UTC m=+0.144302251 container start 6707b35951c47bff3f5ddb6712d543a31e79e8bb5940860791a6500c3525c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_beaver, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:53:37 compute-0 podman[294136]: 2025-09-30 14:53:37.46905673 +0000 UTC m=+0.147048782 container attach 6707b35951c47bff3f5ddb6712d543a31e79e8bb5940860791a6500c3525c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_beaver, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 14:53:37 compute-0 goofy_beaver[294153]: 167 167
Sep 30 14:53:37 compute-0 systemd[1]: libpod-6707b35951c47bff3f5ddb6712d543a31e79e8bb5940860791a6500c3525c343.scope: Deactivated successfully.
Sep 30 14:53:37 compute-0 podman[294136]: 2025-09-30 14:53:37.473094846 +0000 UTC m=+0.151086958 container died 6707b35951c47bff3f5ddb6712d543a31e79e8bb5940860791a6500c3525c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:53:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd70f158615d5c17371938ba563c308dda7deb3c02b5b5900b9cd3d3a1dd0eeb-merged.mount: Deactivated successfully.
Sep 30 14:53:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.524699) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244017524748, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1898, "num_deletes": 506, "total_data_size": 2514934, "memory_usage": 2567024, "flush_reason": "Manual Compaction"}
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Sep 30 14:53:37 compute-0 podman[294136]: 2025-09-30 14:53:37.526729993 +0000 UTC m=+0.204722075 container remove 6707b35951c47bff3f5ddb6712d543a31e79e8bb5940860791a6500c3525c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:53:37 compute-0 systemd[1]: libpod-conmon-6707b35951c47bff3f5ddb6712d543a31e79e8bb5940860791a6500c3525c343.scope: Deactivated successfully.
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244017544568, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2450634, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31807, "largest_seqno": 33704, "table_properties": {"data_size": 2441600, "index_size": 4760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 26016, "raw_average_key_size": 20, "raw_value_size": 2419886, "raw_average_value_size": 1903, "num_data_blocks": 204, "num_entries": 1271, "num_filter_entries": 1271, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759243915, "oldest_key_time": 1759243915, "file_creation_time": 1759244017, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 19907 microseconds, and 5317 cpu microseconds.
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.544614) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2450634 bytes OK
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.544633) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.547121) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.547137) EVENT_LOG_v1 {"time_micros": 1759244017547133, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.547158) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2504754, prev total WAL file size 2504754, number of live WAL files 2.
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.547993) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323537' seq:72057594037927935, type:22 .. '6B7600353038' seq:0, type:0; will stop at (end)
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2393KB)], [68(13MB)]
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244017548027, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 16940403, "oldest_snapshot_seqno": -1}
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6539 keys, 15462614 bytes, temperature: kUnknown
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244017656772, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 15462614, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15417623, "index_size": 27535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 171202, "raw_average_key_size": 26, "raw_value_size": 15298591, "raw_average_value_size": 2339, "num_data_blocks": 1091, "num_entries": 6539, "num_filter_entries": 6539, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759244017, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.657599) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 15462614 bytes
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.661878) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.1 rd, 141.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 13.8 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(13.2) write-amplify(6.3) OK, records in: 7568, records dropped: 1029 output_compression: NoCompression
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.661916) EVENT_LOG_v1 {"time_micros": 1759244017661900, "job": 38, "event": "compaction_finished", "compaction_time_micros": 109243, "compaction_time_cpu_micros": 28552, "output_level": 6, "num_output_files": 1, "total_output_size": 15462614, "num_input_records": 7568, "num_output_records": 6539, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244017662462, "job": 38, "event": "table_file_deletion", "file_number": 70}
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244017665492, "job": 38, "event": "table_file_deletion", "file_number": 68}
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.547941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.665564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.665571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.665573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.665575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:53:37 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:53:37.665577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:53:37 compute-0 podman[294177]: 2025-09-30 14:53:37.711147297 +0000 UTC m=+0.042162699 container create 4322692ed75a8a924c016f7605710436734300b3e77623d43d96e2b0992d7115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:53:37 compute-0 systemd[1]: Started libpod-conmon-4322692ed75a8a924c016f7605710436734300b3e77623d43d96e2b0992d7115.scope.
Sep 30 14:53:37 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5261d25f4bf460e363bf21d541f15e4cae614950d3e0618fbcc12adec10a4cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5261d25f4bf460e363bf21d541f15e4cae614950d3e0618fbcc12adec10a4cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5261d25f4bf460e363bf21d541f15e4cae614950d3e0618fbcc12adec10a4cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:37 compute-0 podman[294177]: 2025-09-30 14:53:37.692610944 +0000 UTC m=+0.023626376 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5261d25f4bf460e363bf21d541f15e4cae614950d3e0618fbcc12adec10a4cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:37 compute-0 podman[294177]: 2025-09-30 14:53:37.798218096 +0000 UTC m=+0.129233548 container init 4322692ed75a8a924c016f7605710436734300b3e77623d43d96e2b0992d7115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:53:37 compute-0 podman[294177]: 2025-09-30 14:53:37.804548671 +0000 UTC m=+0.135564083 container start 4322692ed75a8a924c016f7605710436734300b3e77623d43d96e2b0992d7115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:53:37 compute-0 podman[294177]: 2025-09-30 14:53:37.808072473 +0000 UTC m=+0.139087895 container attach 4322692ed75a8a924c016f7605710436734300b3e77623d43d96e2b0992d7115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Sep 30 14:53:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:37.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:38 compute-0 strange_yonath[294193]: {
Sep 30 14:53:38 compute-0 strange_yonath[294193]:     "0": [
Sep 30 14:53:38 compute-0 strange_yonath[294193]:         {
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "devices": [
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "/dev/loop3"
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             ],
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "lv_name": "ceph_lv0",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "lv_size": "21470642176",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "name": "ceph_lv0",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "tags": {
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.cluster_name": "ceph",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.crush_device_class": "",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.encrypted": "0",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.osd_id": "0",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.type": "block",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.vdo": "0",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:                 "ceph.with_tpm": "0"
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             },
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "type": "block",
Sep 30 14:53:38 compute-0 strange_yonath[294193]:             "vg_name": "ceph_vg0"
Sep 30 14:53:38 compute-0 strange_yonath[294193]:         }
Sep 30 14:53:38 compute-0 strange_yonath[294193]:     ]
Sep 30 14:53:38 compute-0 strange_yonath[294193]: }
Sep 30 14:53:38 compute-0 systemd[1]: libpod-4322692ed75a8a924c016f7605710436734300b3e77623d43d96e2b0992d7115.scope: Deactivated successfully.
Sep 30 14:53:38 compute-0 podman[294204]: 2025-09-30 14:53:38.101086388 +0000 UTC m=+0.022981570 container died 4322692ed75a8a924c016f7605710436734300b3e77623d43d96e2b0992d7115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Sep 30 14:53:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5261d25f4bf460e363bf21d541f15e4cae614950d3e0618fbcc12adec10a4cd-merged.mount: Deactivated successfully.
Sep 30 14:53:38 compute-0 podman[294204]: 2025-09-30 14:53:38.139192421 +0000 UTC m=+0.061087603 container remove 4322692ed75a8a924c016f7605710436734300b3e77623d43d96e2b0992d7115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Sep 30 14:53:38 compute-0 systemd[1]: libpod-conmon-4322692ed75a8a924c016f7605710436734300b3e77623d43d96e2b0992d7115.scope: Deactivated successfully.
Sep 30 14:53:38 compute-0 sudo[294072]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:53:38.271 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:53:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:53:38.271 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:53:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:53:38.271 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:53:38 compute-0 sudo[294219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:53:38 compute-0 sudo[294219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:38 compute-0 sudo[294219]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:38 compute-0 sudo[294244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:53:38 compute-0 sudo[294244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:38 compute-0 ceph-mon[74194]: pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 591 B/s rd, 0 op/s
Sep 30 14:53:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:38.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:38.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:38 compute-0 podman[294312]: 2025-09-30 14:53:38.918299361 +0000 UTC m=+0.072246163 container create 631b516ab60d33f8b080122bc9295ff3580e1445b9f070a6ad3da6ae2360928c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Sep 30 14:53:38 compute-0 systemd[1]: Started libpod-conmon-631b516ab60d33f8b080122bc9295ff3580e1445b9f070a6ad3da6ae2360928c.scope.
Sep 30 14:53:38 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:53:38 compute-0 podman[294312]: 2025-09-30 14:53:38.898068044 +0000 UTC m=+0.052014886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:53:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:39 compute-0 podman[294312]: 2025-09-30 14:53:39.004509558 +0000 UTC m=+0.158456360 container init 631b516ab60d33f8b080122bc9295ff3580e1445b9f070a6ad3da6ae2360928c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:53:39 compute-0 podman[294312]: 2025-09-30 14:53:39.01229235 +0000 UTC m=+0.166239142 container start 631b516ab60d33f8b080122bc9295ff3580e1445b9f070a6ad3da6ae2360928c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:53:39 compute-0 podman[294312]: 2025-09-30 14:53:39.015870124 +0000 UTC m=+0.169816916 container attach 631b516ab60d33f8b080122bc9295ff3580e1445b9f070a6ad3da6ae2360928c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:53:39 compute-0 condescending_almeida[294329]: 167 167
Sep 30 14:53:39 compute-0 systemd[1]: libpod-631b516ab60d33f8b080122bc9295ff3580e1445b9f070a6ad3da6ae2360928c.scope: Deactivated successfully.
Sep 30 14:53:39 compute-0 podman[294334]: 2025-09-30 14:53:39.052060347 +0000 UTC m=+0.023524204 container died 631b516ab60d33f8b080122bc9295ff3580e1445b9f070a6ad3da6ae2360928c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_almeida, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:53:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d9d77606767f5fd0913737dfa9efcb07fe7b33febd5f7ada3b9a6f9485fa73f-merged.mount: Deactivated successfully.
Sep 30 14:53:39 compute-0 podman[294334]: 2025-09-30 14:53:39.10284563 +0000 UTC m=+0.074309497 container remove 631b516ab60d33f8b080122bc9295ff3580e1445b9f070a6ad3da6ae2360928c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_almeida, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:53:39 compute-0 systemd[1]: libpod-conmon-631b516ab60d33f8b080122bc9295ff3580e1445b9f070a6ad3da6ae2360928c.scope: Deactivated successfully.
Sep 30 14:53:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 591 B/s rd, 0 op/s
Sep 30 14:53:39 compute-0 podman[294357]: 2025-09-30 14:53:39.315931732 +0000 UTC m=+0.056262417 container create f84e7bae8155aeea2d6b400aba976d45a9222181c7c258b51294e83231be2a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatelet, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:53:39 compute-0 systemd[1]: Started libpod-conmon-f84e7bae8155aeea2d6b400aba976d45a9222181c7c258b51294e83231be2a14.scope.
Sep 30 14:53:39 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39eb4f4e3b0537ec5dd0860abbea0405bc1a9f3e08147cbef778846e2334873/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:39 compute-0 podman[294357]: 2025-09-30 14:53:39.298735654 +0000 UTC m=+0.039066369 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39eb4f4e3b0537ec5dd0860abbea0405bc1a9f3e08147cbef778846e2334873/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39eb4f4e3b0537ec5dd0860abbea0405bc1a9f3e08147cbef778846e2334873/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a39eb4f4e3b0537ec5dd0860abbea0405bc1a9f3e08147cbef778846e2334873/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:53:39 compute-0 podman[294357]: 2025-09-30 14:53:39.40449821 +0000 UTC m=+0.144828915 container init f84e7bae8155aeea2d6b400aba976d45a9222181c7c258b51294e83231be2a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Sep 30 14:53:39 compute-0 podman[294357]: 2025-09-30 14:53:39.411621966 +0000 UTC m=+0.151952641 container start f84e7bae8155aeea2d6b400aba976d45a9222181c7c258b51294e83231be2a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatelet, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Sep 30 14:53:39 compute-0 podman[294357]: 2025-09-30 14:53:39.414542562 +0000 UTC m=+0.154873277 container attach f84e7bae8155aeea2d6b400aba976d45a9222181c7c258b51294e83231be2a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatelet, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:53:39 compute-0 ceph-mon[74194]: pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 591 B/s rd, 0 op/s
Sep 30 14:53:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:39.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:40 compute-0 lvm[294450]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:53:40 compute-0 lvm[294450]: VG ceph_vg0 finished
Sep 30 14:53:40 compute-0 funny_chatelet[294373]: {}
Sep 30 14:53:40 compute-0 systemd[1]: libpod-f84e7bae8155aeea2d6b400aba976d45a9222181c7c258b51294e83231be2a14.scope: Deactivated successfully.
Sep 30 14:53:40 compute-0 systemd[1]: libpod-f84e7bae8155aeea2d6b400aba976d45a9222181c7c258b51294e83231be2a14.scope: Consumed 1.202s CPU time.
Sep 30 14:53:40 compute-0 podman[294357]: 2025-09-30 14:53:40.146051412 +0000 UTC m=+0.886382097 container died f84e7bae8155aeea2d6b400aba976d45a9222181c7c258b51294e83231be2a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatelet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:53:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a39eb4f4e3b0537ec5dd0860abbea0405bc1a9f3e08147cbef778846e2334873-merged.mount: Deactivated successfully.
Sep 30 14:53:40 compute-0 podman[294357]: 2025-09-30 14:53:40.197520803 +0000 UTC m=+0.937851488 container remove f84e7bae8155aeea2d6b400aba976d45a9222181c7c258b51294e83231be2a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:53:40 compute-0 systemd[1]: libpod-conmon-f84e7bae8155aeea2d6b400aba976d45a9222181c7c258b51294e83231be2a14.scope: Deactivated successfully.
Sep 30 14:53:40 compute-0 sudo[294244]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:53:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:53:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:40 compute-0 sudo[294465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:53:40 compute-0 sudo[294465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:40 compute-0 sudo[294465]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:40.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:40 compute-0 nova_compute[261524]: 2025-09-30 14:53:40.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 886 B/s rd, 0 op/s
Sep 30 14:53:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:53:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:41.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:42 compute-0 nova_compute[261524]: 2025-09-30 14:53:42.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:42 compute-0 podman[294492]: 2025-09-30 14:53:42.180636855 +0000 UTC m=+0.085813497 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923)
Sep 30 14:53:42 compute-0 podman[294495]: 2025-09-30 14:53:42.201963341 +0000 UTC m=+0.107709668 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent)
Sep 30 14:53:42 compute-0 podman[294494]: 2025-09-30 14:53:42.210943585 +0000 UTC m=+0.114959397 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Sep 30 14:53:42 compute-0 podman[294493]: 2025-09-30 14:53:42.240208417 +0000 UTC m=+0.143995213 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, container_name=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:53:42 compute-0 ceph-mon[74194]: pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 886 B/s rd, 0 op/s
Sep 30 14:53:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:42.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 591 B/s rd, 0 op/s
Sep 30 14:53:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:43.717Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:43.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:44 compute-0 ceph-mon[74194]: pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 591 B/s rd, 0 op/s
Sep 30 14:53:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:53:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:53:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:44] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:53:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:44] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:53:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:44.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 591 B/s rd, 0 op/s
Sep 30 14:53:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:53:45 compute-0 nova_compute[261524]: 2025-09-30 14:53:45.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:46 compute-0 ceph-mon[74194]: pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 591 B/s rd, 0 op/s
Sep 30 14:53:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:46.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:47 compute-0 nova_compute[261524]: 2025-09-30 14:53:47.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:47.214Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:47 compute-0 sudo[294571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:53:47 compute-0 sudo[294571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:53:47 compute-0 sudo[294571]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:47 compute-0 ceph-mon[74194]: pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:47.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:48.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:48.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:49.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:50 compute-0 ceph-mon[74194]: pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:50.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:50 compute-0 nova_compute[261524]: 2025-09-30 14:53:50.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:51.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:52 compute-0 nova_compute[261524]: 2025-09-30 14:53:52.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:52 compute-0 ceph-mon[74194]: pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 14:53:53 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3971 syncs, 3.27 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1924 writes, 5855 keys, 1924 commit groups, 1.0 writes per commit group, ingest: 5.83 MB, 0.01 MB/s
                                           Interval WAL: 1924 writes, 867 syncs, 2.22 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 14:53:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:53 compute-0 ceph-mon[74194]: pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:53.718Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:53:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:53.718Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:53.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:54 compute-0 sudo[285670]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:54 compute-0 sshd-session[285669]: Received disconnect from 192.168.122.10 port 50618:11: disconnected by user
Sep 30 14:53:54 compute-0 sshd-session[285669]: Disconnected from user zuul 192.168.122.10 port 50618
Sep 30 14:53:54 compute-0 sshd-session[285665]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:53:54 compute-0 systemd-logind[808]: Session 57 logged out. Waiting for processes to exit.
Sep 30 14:53:54 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Sep 30 14:53:54 compute-0 systemd[1]: session-57.scope: Consumed 3min 94ms CPU time, 713.4M memory peak, read 222.7M from disk, written 65.7M to disk.
Sep 30 14:53:54 compute-0 systemd-logind[808]: Removed session 57.
Sep 30 14:53:54 compute-0 sshd-session[294604]: Accepted publickey for zuul from 192.168.122.10 port 34982 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:53:54 compute-0 systemd-logind[808]: New session 58 of user zuul.
Sep 30 14:53:54 compute-0 systemd[1]: Started Session 58 of User zuul.
Sep 30 14:53:54 compute-0 sshd-session[294604]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:53:54 compute-0 sudo[294608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-09-30-ilpbqli.tar.xz
Sep 30 14:53:54 compute-0 sudo[294608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:53:54 compute-0 sudo[294608]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:54 compute-0 sshd-session[294607]: Received disconnect from 192.168.122.10 port 34982:11: disconnected by user
Sep 30 14:53:54 compute-0 sshd-session[294607]: Disconnected from user zuul 192.168.122.10 port 34982
Sep 30 14:53:54 compute-0 sshd-session[294604]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:53:54 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Sep 30 14:53:54 compute-0 systemd-logind[808]: Session 58 logged out. Waiting for processes to exit.
Sep 30 14:53:54 compute-0 systemd-logind[808]: Removed session 58.
Sep 30 14:53:54 compute-0 sshd-session[294633]: Accepted publickey for zuul from 192.168.122.10 port 34996 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 14:53:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:54] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:53:54 compute-0 systemd-logind[808]: New session 59 of user zuul.
Sep 30 14:53:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:53:54] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:53:54 compute-0 systemd[1]: Started Session 59 of User zuul.
Sep 30 14:53:54 compute-0 sshd-session[294633]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 14:53:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:54.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:54 compute-0 sudo[294637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Sep 30 14:53:54 compute-0 sudo[294637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 14:53:54 compute-0 sudo[294637]: pam_unix(sudo:session): session closed for user root
Sep 30 14:53:54 compute-0 sshd-session[294636]: Received disconnect from 192.168.122.10 port 34996:11: disconnected by user
Sep 30 14:53:54 compute-0 sshd-session[294636]: Disconnected from user zuul 192.168.122.10 port 34996
Sep 30 14:53:54 compute-0 sshd-session[294633]: pam_unix(sshd:session): session closed for user zuul
Sep 30 14:53:54 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Sep 30 14:53:54 compute-0 systemd-logind[808]: Session 59 logged out. Waiting for processes to exit.
Sep 30 14:53:54 compute-0 systemd-logind[808]: Removed session 59.
Sep 30 14:53:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:55 compute-0 nova_compute[261524]: 2025-09-30 14:53:55.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:53:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:55.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:53:56 compute-0 ceph-mon[74194]: pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:56.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:57 compute-0 nova_compute[261524]: 2025-09-30 14:53:57.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:53:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:57.215Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:53:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:57.216Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:53:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:57.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:58 compute-0 ceph-mon[74194]: pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:53:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:53:58.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:53:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:53:58.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:53:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:53:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:53:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:53:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:53:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:53:59
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.log', 'images', 'vms', '.nfs', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:53:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:53:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:53:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:53:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:53:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:53:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:53:59.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:54:00 compute-0 ceph-mon[74194]: pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:54:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:00.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:00 compute-0 nova_compute[261524]: 2025-09-30 14:54:00.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:54:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:54:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:54:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:54:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:54:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:54:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:54:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 14:54:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:01.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 14:54:02 compute-0 nova_compute[261524]: 2025-09-30 14:54:02.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:02 compute-0 ceph-mon[74194]: pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:03.720Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:03.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:04 compute-0 ceph-mon[74194]: pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:04] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:54:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:04] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:54:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:04.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:05 compute-0 nova_compute[261524]: 2025-09-30 14:54:05.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:05.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:06 compute-0 ceph-mon[74194]: pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:06.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:07 compute-0 nova_compute[261524]: 2025-09-30 14:54:07.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:07.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:07 compute-0 sudo[294675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:54:07 compute-0 sudo[294675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:07 compute-0 sudo[294675]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:07.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:08 compute-0 ceph-mon[74194]: pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:08.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:08.864Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:54:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:08.864Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:54:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:08.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:54:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:10 compute-0 ceph-mon[74194]: pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:10.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:10 compute-0 nova_compute[261524]: 2025-09-30 14:54:10.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 14:54:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3356310247' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:54:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 14:54:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3356310247' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:54:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3356310247' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:54:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3356310247' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:54:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:11.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:12 compute-0 nova_compute[261524]: 2025-09-30 14:54:12.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:12 compute-0 ceph-mon[74194]: pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:12.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:13 compute-0 podman[294707]: 2025-09-30 14:54:13.137024369 +0000 UTC m=+0.057730535 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20250923, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Sep 30 14:54:13 compute-0 podman[294705]: 2025-09-30 14:54:13.143016975 +0000 UTC m=+0.067480679 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 14:54:13 compute-0 podman[294708]: 2025-09-30 14:54:13.157273407 +0000 UTC m=+0.075775616 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 14:54:13 compute-0 podman[294706]: 2025-09-30 14:54:13.161897887 +0000 UTC m=+0.085915520 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Sep 30 14:54:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:13 compute-0 ceph-mon[74194]: pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:13.720Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:13.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:54:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:54:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:14] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:54:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:14] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:54:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:54:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:54:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:14.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:54:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:15 compute-0 ceph-mon[74194]: pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:15 compute-0 nova_compute[261524]: 2025-09-30 14:54:15.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:15.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:16.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:17 compute-0 nova_compute[261524]: 2025-09-30 14:54:17.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:17.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:17.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:18 compute-0 ceph-mon[74194]: pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:18.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:19 compute-0 nova_compute[261524]: 2025-09-30 14:54:19.949 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:54:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:19.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:54:20 compute-0 ceph-mon[74194]: pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:20.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:20 compute-0 nova_compute[261524]: 2025-09-30 14:54:20.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:20 compute-0 nova_compute[261524]: 2025-09-30 14:54:20.951 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:20 compute-0 nova_compute[261524]: 2025-09-30 14:54:20.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2576381844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:54:21 compute-0 ceph-mon[74194]: pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3922823511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:54:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:21.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:21 compute-0 nova_compute[261524]: 2025-09-30 14:54:21.972 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:21 compute-0 nova_compute[261524]: 2025-09-30 14:54:21.972 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:54:21 compute-0 nova_compute[261524]: 2025-09-30 14:54:21.973 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:54:21 compute-0 nova_compute[261524]: 2025-09-30 14:54:21.988 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:54:22 compute-0 nova_compute[261524]: 2025-09-30 14:54:22.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2553982944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:54:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:22.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:22 compute-0 nova_compute[261524]: 2025-09-30 14:54:22.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:22 compute-0 nova_compute[261524]: 2025-09-30 14:54:22.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1450632600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:54:23 compute-0 ceph-mon[74194]: pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:23.721Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:23 compute-0 nova_compute[261524]: 2025-09-30 14:54:23.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:23 compute-0 nova_compute[261524]: 2025-09-30 14:54:23.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:23 compute-0 nova_compute[261524]: 2025-09-30 14:54:23.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:23.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:23 compute-0 nova_compute[261524]: 2025-09-30 14:54:23.982 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:54:23 compute-0 nova_compute[261524]: 2025-09-30 14:54:23.983 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:54:23 compute-0 nova_compute[261524]: 2025-09-30 14:54:23.983 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:54:23 compute-0 nova_compute[261524]: 2025-09-30 14:54:23.983 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:54:23 compute-0 nova_compute[261524]: 2025-09-30 14:54:23.983 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:54:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:54:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4153825323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:54:24 compute-0 nova_compute[261524]: 2025-09-30 14:54:24.470 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:54:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4153825323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:54:24 compute-0 nova_compute[261524]: 2025-09-30 14:54:24.685 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:54:24 compute-0 nova_compute[261524]: 2025-09-30 14:54:24.688 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4492MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:54:24 compute-0 nova_compute[261524]: 2025-09-30 14:54:24.689 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:54:24 compute-0 nova_compute[261524]: 2025-09-30 14:54:24.689 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:54:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:24] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:54:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:24] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:54:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:24 compute-0 nova_compute[261524]: 2025-09-30 14:54:24.879 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:54:24 compute-0 nova_compute[261524]: 2025-09-30 14:54:24.880 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:54:25 compute-0 nova_compute[261524]: 2025-09-30 14:54:25.010 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:54:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:54:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89115120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:54:25 compute-0 nova_compute[261524]: 2025-09-30 14:54:25.497 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:54:25 compute-0 nova_compute[261524]: 2025-09-30 14:54:25.505 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:54:25 compute-0 nova_compute[261524]: 2025-09-30 14:54:25.527 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:54:25 compute-0 nova_compute[261524]: 2025-09-30 14:54:25.531 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:54:25 compute-0 nova_compute[261524]: 2025-09-30 14:54:25.531 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:54:25 compute-0 ceph-mon[74194]: pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/89115120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:54:25 compute-0 nova_compute[261524]: 2025-09-30 14:54:25.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:25.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:26 compute-0 nova_compute[261524]: 2025-09-30 14:54:26.531 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:26 compute-0 nova_compute[261524]: 2025-09-30 14:54:26.531 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:26 compute-0 nova_compute[261524]: 2025-09-30 14:54:26.531 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:54:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:27 compute-0 nova_compute[261524]: 2025-09-30 14:54:27.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:27.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:27 compute-0 sudo[294841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:54:27 compute-0 sudo[294841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:27 compute-0 sudo[294841]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:27.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:28 compute-0 ceph-mon[74194]: pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:28.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:54:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:54:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:54:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:54:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:54:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:54:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:54:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:54:29 compute-0 nova_compute[261524]: 2025-09-30 14:54:29.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:29 compute-0 nova_compute[261524]: 2025-09-30 14:54:29.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Sep 30 14:54:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:29.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:30 compute-0 ceph-mon[74194]: pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:54:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:30.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:30 compute-0 nova_compute[261524]: 2025-09-30 14:54:30.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:31.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:31 compute-0 nova_compute[261524]: 2025-09-30 14:54:31.981 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:54:31 compute-0 nova_compute[261524]: 2025-09-30 14:54:31.982 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Sep 30 14:54:31 compute-0 nova_compute[261524]: 2025-09-30 14:54:31.999 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Sep 30 14:54:32 compute-0 nova_compute[261524]: 2025-09-30 14:54:32.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:32 compute-0 ceph-mon[74194]: pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:32.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:33.722Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:33.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:34 compute-0 ceph-mon[74194]: pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:34] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:54:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:34] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:54:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:34.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:35 compute-0 nova_compute[261524]: 2025-09-30 14:54:35.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:35.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:36 compute-0 ceph-mon[74194]: pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:37 compute-0 nova_compute[261524]: 2025-09-30 14:54:37.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:37.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:37.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:54:38.271 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:54:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:54:38.272 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:54:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:54:38.272 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:54:38 compute-0 ceph-mon[74194]: pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:38.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.002000052s ======
Sep 30 14:54:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:39.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Sep 30 14:54:40 compute-0 ceph-mon[74194]: pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:40 compute-0 sudo[294879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:54:40 compute-0 sudo[294879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:40 compute-0 sudo[294879]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:40 compute-0 sudo[294904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:54:40 compute-0 sudo[294904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:40 compute-0 nova_compute[261524]: 2025-09-30 14:54:40.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 14:54:40 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:40 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 14:54:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:41 compute-0 sudo[294904]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:54:41 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:54:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:54:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:54:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:54:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:54:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:54:41 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:54:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:54:41 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:54:41 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:54:41 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:54:41 compute-0 sudo[294960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:54:41 compute-0 sudo[294960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:41 compute-0 sudo[294960]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:41 compute-0 sudo[294985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:54:41 compute-0 sudo[294985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:41.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:41 compute-0 ceph-mon[74194]: pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:54:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:54:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:54:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:54:41 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:54:42 compute-0 nova_compute[261524]: 2025-09-30 14:54:42.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:42 compute-0 podman[295052]: 2025-09-30 14:54:42.332289796 +0000 UTC m=+0.054332516 container create 63ebf4dc46f450adc15bc0715b1ee2b16929a58ebc57bb28e27ece02e0fc2d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Sep 30 14:54:42 compute-0 systemd[1]: Started libpod-conmon-63ebf4dc46f450adc15bc0715b1ee2b16929a58ebc57bb28e27ece02e0fc2d9a.scope.
Sep 30 14:54:42 compute-0 podman[295052]: 2025-09-30 14:54:42.312227044 +0000 UTC m=+0.034269784 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:54:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:54:42 compute-0 podman[295052]: 2025-09-30 14:54:42.434837308 +0000 UTC m=+0.156880078 container init 63ebf4dc46f450adc15bc0715b1ee2b16929a58ebc57bb28e27ece02e0fc2d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:54:42 compute-0 podman[295052]: 2025-09-30 14:54:42.447222501 +0000 UTC m=+0.169265221 container start 63ebf4dc46f450adc15bc0715b1ee2b16929a58ebc57bb28e27ece02e0fc2d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:54:42 compute-0 podman[295052]: 2025-09-30 14:54:42.451353669 +0000 UTC m=+0.173396449 container attach 63ebf4dc46f450adc15bc0715b1ee2b16929a58ebc57bb28e27ece02e0fc2d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:54:42 compute-0 flamboyant_kapitsa[295069]: 167 167
Sep 30 14:54:42 compute-0 systemd[1]: libpod-63ebf4dc46f450adc15bc0715b1ee2b16929a58ebc57bb28e27ece02e0fc2d9a.scope: Deactivated successfully.
Sep 30 14:54:42 compute-0 podman[295052]: 2025-09-30 14:54:42.458165756 +0000 UTC m=+0.180208496 container died 63ebf4dc46f450adc15bc0715b1ee2b16929a58ebc57bb28e27ece02e0fc2d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:54:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5007985d6326d9e90f0276d5c6d5790766da107c710d97c24927c1aab4dd42d5-merged.mount: Deactivated successfully.
Sep 30 14:54:42 compute-0 podman[295052]: 2025-09-30 14:54:42.509874704 +0000 UTC m=+0.231917424 container remove 63ebf4dc46f450adc15bc0715b1ee2b16929a58ebc57bb28e27ece02e0fc2d9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 14:54:42 compute-0 systemd[1]: libpod-conmon-63ebf4dc46f450adc15bc0715b1ee2b16929a58ebc57bb28e27ece02e0fc2d9a.scope: Deactivated successfully.
Sep 30 14:54:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:42 compute-0 podman[295095]: 2025-09-30 14:54:42.736270053 +0000 UTC m=+0.059793149 container create f48a48c2c42123a140f448128518ae818c7a97418a027d35e6b0797954109bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:54:42 compute-0 systemd[1]: Started libpod-conmon-f48a48c2c42123a140f448128518ae818c7a97418a027d35e6b0797954109bea.scope.
Sep 30 14:54:42 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:54:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e2a50aa8019d409d45894511adb87829100c587d50c4441cdbec518441b7a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:42 compute-0 podman[295095]: 2025-09-30 14:54:42.71583241 +0000 UTC m=+0.039355496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:54:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e2a50aa8019d409d45894511adb87829100c587d50c4441cdbec518441b7a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e2a50aa8019d409d45894511adb87829100c587d50c4441cdbec518441b7a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e2a50aa8019d409d45894511adb87829100c587d50c4441cdbec518441b7a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e2a50aa8019d409d45894511adb87829100c587d50c4441cdbec518441b7a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:42 compute-0 podman[295095]: 2025-09-30 14:54:42.82944201 +0000 UTC m=+0.152965096 container init f48a48c2c42123a140f448128518ae818c7a97418a027d35e6b0797954109bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Sep 30 14:54:42 compute-0 podman[295095]: 2025-09-30 14:54:42.848676102 +0000 UTC m=+0.172199168 container start f48a48c2c42123a140f448128518ae818c7a97418a027d35e6b0797954109bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 14:54:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:42 compute-0 podman[295095]: 2025-09-30 14:54:42.852284966 +0000 UTC m=+0.175808082 container attach f48a48c2c42123a140f448128518ae818c7a97418a027d35e6b0797954109bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcclintock, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 14:54:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:42.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:43 compute-0 ceph-mon[74194]: pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:43 compute-0 priceless_mcclintock[295112]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:54:43 compute-0 priceless_mcclintock[295112]: --> All data devices are unavailable
Sep 30 14:54:43 compute-0 systemd[1]: libpod-f48a48c2c42123a140f448128518ae818c7a97418a027d35e6b0797954109bea.scope: Deactivated successfully.
Sep 30 14:54:43 compute-0 podman[295095]: 2025-09-30 14:54:43.267033292 +0000 UTC m=+0.590556398 container died f48a48c2c42123a140f448128518ae818c7a97418a027d35e6b0797954109bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcclintock, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:54:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-21e2a50aa8019d409d45894511adb87829100c587d50c4441cdbec518441b7a2-merged.mount: Deactivated successfully.
Sep 30 14:54:43 compute-0 podman[295095]: 2025-09-30 14:54:43.349425809 +0000 UTC m=+0.672948895 container remove f48a48c2c42123a140f448128518ae818c7a97418a027d35e6b0797954109bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mcclintock, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:54:43 compute-0 systemd[1]: libpod-conmon-f48a48c2c42123a140f448128518ae818c7a97418a027d35e6b0797954109bea.scope: Deactivated successfully.
Sep 30 14:54:43 compute-0 sudo[294985]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:43 compute-0 podman[295142]: 2025-09-30 14:54:43.398117718 +0000 UTC m=+0.069863592 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Sep 30 14:54:43 compute-0 podman[295128]: 2025-09-30 14:54:43.400957492 +0000 UTC m=+0.087037579 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Sep 30 14:54:43 compute-0 podman[295136]: 2025-09-30 14:54:43.409631018 +0000 UTC m=+0.092242995 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Sep 30 14:54:43 compute-0 podman[295135]: 2025-09-30 14:54:43.4580506 +0000 UTC m=+0.147740841 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Sep 30 14:54:43 compute-0 sudo[295211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:54:43 compute-0 sudo[295211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:43 compute-0 sudo[295211]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:43 compute-0 sudo[295241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:54:43 compute-0 sudo[295241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:43.723Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:54:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:43.724Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:43.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:44 compute-0 podman[295307]: 2025-09-30 14:54:44.037926469 +0000 UTC m=+0.061825312 container create 4f184c8ab0692cc87c035de3c82ff92e1a95b991da888b4c82959b30dff8937e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_poitras, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:54:44 compute-0 systemd[1]: Started libpod-conmon-4f184c8ab0692cc87c035de3c82ff92e1a95b991da888b4c82959b30dff8937e.scope.
Sep 30 14:54:44 compute-0 podman[295307]: 2025-09-30 14:54:44.00879109 +0000 UTC m=+0.032690003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:54:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:54:44 compute-0 podman[295307]: 2025-09-30 14:54:44.11855286 +0000 UTC m=+0.142451703 container init 4f184c8ab0692cc87c035de3c82ff92e1a95b991da888b4c82959b30dff8937e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:54:44 compute-0 podman[295307]: 2025-09-30 14:54:44.131048575 +0000 UTC m=+0.154947368 container start 4f184c8ab0692cc87c035de3c82ff92e1a95b991da888b4c82959b30dff8937e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_poitras, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:54:44 compute-0 podman[295307]: 2025-09-30 14:54:44.134616998 +0000 UTC m=+0.158515811 container attach 4f184c8ab0692cc87c035de3c82ff92e1a95b991da888b4c82959b30dff8937e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_poitras, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Sep 30 14:54:44 compute-0 infallible_poitras[295323]: 167 167
Sep 30 14:54:44 compute-0 systemd[1]: libpod-4f184c8ab0692cc87c035de3c82ff92e1a95b991da888b4c82959b30dff8937e.scope: Deactivated successfully.
Sep 30 14:54:44 compute-0 podman[295307]: 2025-09-30 14:54:44.14122386 +0000 UTC m=+0.165122663 container died 4f184c8ab0692cc87c035de3c82ff92e1a95b991da888b4c82959b30dff8937e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_poitras, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Sep 30 14:54:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-16257787db7397f3165aada01f02538a6559e6e075d553f8122a61082bc99852-merged.mount: Deactivated successfully.
Sep 30 14:54:44 compute-0 podman[295307]: 2025-09-30 14:54:44.198157574 +0000 UTC m=+0.222056397 container remove 4f184c8ab0692cc87c035de3c82ff92e1a95b991da888b4c82959b30dff8937e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:54:44 compute-0 systemd[1]: libpod-conmon-4f184c8ab0692cc87c035de3c82ff92e1a95b991da888b4c82959b30dff8937e.scope: Deactivated successfully.
Sep 30 14:54:44 compute-0 podman[295347]: 2025-09-30 14:54:44.396004689 +0000 UTC m=+0.060394055 container create c9eb800485360d497ce982e613eec95658359456f7cdfc663af1448acb6d0d85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:54:44 compute-0 systemd[1]: Started libpod-conmon-c9eb800485360d497ce982e613eec95658359456f7cdfc663af1448acb6d0d85.scope.
Sep 30 14:54:44 compute-0 podman[295347]: 2025-09-30 14:54:44.373335978 +0000 UTC m=+0.037725434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:54:44 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618249f15b3e41c6632826d8ad6298bbb14a6e44af7e8d85875901889bb293b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618249f15b3e41c6632826d8ad6298bbb14a6e44af7e8d85875901889bb293b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618249f15b3e41c6632826d8ad6298bbb14a6e44af7e8d85875901889bb293b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618249f15b3e41c6632826d8ad6298bbb14a6e44af7e8d85875901889bb293b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:44 compute-0 podman[295347]: 2025-09-30 14:54:44.495648175 +0000 UTC m=+0.160037591 container init c9eb800485360d497ce982e613eec95658359456f7cdfc663af1448acb6d0d85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:54:44 compute-0 podman[295347]: 2025-09-30 14:54:44.511482838 +0000 UTC m=+0.175872234 container start c9eb800485360d497ce982e613eec95658359456f7cdfc663af1448acb6d0d85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Sep 30 14:54:44 compute-0 podman[295347]: 2025-09-30 14:54:44.514986369 +0000 UTC m=+0.179375755 container attach c9eb800485360d497ce982e613eec95658359456f7cdfc663af1448acb6d0d85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mendeleev, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:54:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:54:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:54:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:44] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:54:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:44] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]: {
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:     "0": [
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:         {
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "devices": [
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "/dev/loop3"
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             ],
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "lv_name": "ceph_lv0",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "lv_size": "21470642176",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "name": "ceph_lv0",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "tags": {
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.cluster_name": "ceph",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.crush_device_class": "",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.encrypted": "0",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.osd_id": "0",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.type": "block",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.vdo": "0",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:                 "ceph.with_tpm": "0"
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             },
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "type": "block",
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:             "vg_name": "ceph_vg0"
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:         }
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]:     ]
Sep 30 14:54:44 compute-0 dreamy_mendeleev[295364]: }
Sep 30 14:54:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:44.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:44 compute-0 systemd[1]: libpod-c9eb800485360d497ce982e613eec95658359456f7cdfc663af1448acb6d0d85.scope: Deactivated successfully.
Sep 30 14:54:44 compute-0 podman[295347]: 2025-09-30 14:54:44.858560522 +0000 UTC m=+0.522949978 container died c9eb800485360d497ce982e613eec95658359456f7cdfc663af1448acb6d0d85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mendeleev, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:54:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-618249f15b3e41c6632826d8ad6298bbb14a6e44af7e8d85875901889bb293b4-merged.mount: Deactivated successfully.
Sep 30 14:54:44 compute-0 podman[295347]: 2025-09-30 14:54:44.916044699 +0000 UTC m=+0.580434065 container remove c9eb800485360d497ce982e613eec95658359456f7cdfc663af1448acb6d0d85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:54:44 compute-0 systemd[1]: libpod-conmon-c9eb800485360d497ce982e613eec95658359456f7cdfc663af1448acb6d0d85.scope: Deactivated successfully.
Sep 30 14:54:44 compute-0 sudo[295241]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:45 compute-0 ceph-mon[74194]: pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:54:45 compute-0 sudo[295387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:54:45 compute-0 sudo[295387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:45 compute-0 sudo[295387]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:45 compute-0 sudo[295412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:54:45 compute-0 sudo[295412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:45 compute-0 podman[295479]: 2025-09-30 14:54:45.66446536 +0000 UTC m=+0.043884485 container create 6057f82ebf0be8ec1701080351112e541824efdc2e756ec7ca5f23f1cb102463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:54:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:45 compute-0 systemd[1]: Started libpod-conmon-6057f82ebf0be8ec1701080351112e541824efdc2e756ec7ca5f23f1cb102463.scope.
Sep 30 14:54:45 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:54:45 compute-0 podman[295479]: 2025-09-30 14:54:45.644501149 +0000 UTC m=+0.023920334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:54:45 compute-0 podman[295479]: 2025-09-30 14:54:45.746708832 +0000 UTC m=+0.126127977 container init 6057f82ebf0be8ec1701080351112e541824efdc2e756ec7ca5f23f1cb102463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:54:45 compute-0 podman[295479]: 2025-09-30 14:54:45.753304864 +0000 UTC m=+0.132723999 container start 6057f82ebf0be8ec1701080351112e541824efdc2e756ec7ca5f23f1cb102463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:54:45 compute-0 eloquent_wright[295495]: 167 167
Sep 30 14:54:45 compute-0 systemd[1]: libpod-6057f82ebf0be8ec1701080351112e541824efdc2e756ec7ca5f23f1cb102463.scope: Deactivated successfully.
Sep 30 14:54:45 compute-0 podman[295479]: 2025-09-30 14:54:45.758996883 +0000 UTC m=+0.138416018 container attach 6057f82ebf0be8ec1701080351112e541824efdc2e756ec7ca5f23f1cb102463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:54:45 compute-0 podman[295479]: 2025-09-30 14:54:45.7596712 +0000 UTC m=+0.139090345 container died 6057f82ebf0be8ec1701080351112e541824efdc2e756ec7ca5f23f1cb102463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wright, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:54:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-604bf7dc359fdde7c4d501425255d4bb4f944104af411e14e978294b44cc3aa6-merged.mount: Deactivated successfully.
Sep 30 14:54:45 compute-0 podman[295479]: 2025-09-30 14:54:45.795065652 +0000 UTC m=+0.174484777 container remove 6057f82ebf0be8ec1701080351112e541824efdc2e756ec7ca5f23f1cb102463 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_wright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:54:45 compute-0 systemd[1]: libpod-conmon-6057f82ebf0be8ec1701080351112e541824efdc2e756ec7ca5f23f1cb102463.scope: Deactivated successfully.
Sep 30 14:54:45 compute-0 nova_compute[261524]: 2025-09-30 14:54:45.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:45.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:45 compute-0 podman[295519]: 2025-09-30 14:54:45.996820209 +0000 UTC m=+0.047859948 container create 9a4a7f619eb6d0b35ffbb8c24bb103cb7818bb8cc99e9b168bb4499062ea8187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:54:46 compute-0 systemd[1]: Started libpod-conmon-9a4a7f619eb6d0b35ffbb8c24bb103cb7818bb8cc99e9b168bb4499062ea8187.scope.
Sep 30 14:54:46 compute-0 podman[295519]: 2025-09-30 14:54:45.97765607 +0000 UTC m=+0.028695789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:54:46 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8cb814e410a06c30a8c12ebe4068decea608104f9f24d8bb47513f6ab14ef53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8cb814e410a06c30a8c12ebe4068decea608104f9f24d8bb47513f6ab14ef53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8cb814e410a06c30a8c12ebe4068decea608104f9f24d8bb47513f6ab14ef53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8cb814e410a06c30a8c12ebe4068decea608104f9f24d8bb47513f6ab14ef53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:54:46 compute-0 podman[295519]: 2025-09-30 14:54:46.118164461 +0000 UTC m=+0.169204150 container init 9a4a7f619eb6d0b35ffbb8c24bb103cb7818bb8cc99e9b168bb4499062ea8187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bell, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 14:54:46 compute-0 podman[295519]: 2025-09-30 14:54:46.130724469 +0000 UTC m=+0.181764198 container start 9a4a7f619eb6d0b35ffbb8c24bb103cb7818bb8cc99e9b168bb4499062ea8187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 14:54:46 compute-0 podman[295519]: 2025-09-30 14:54:46.134894927 +0000 UTC m=+0.185934616 container attach 9a4a7f619eb6d0b35ffbb8c24bb103cb7818bb8cc99e9b168bb4499062ea8187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bell, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:54:46 compute-0 lvm[295610]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:54:46 compute-0 lvm[295610]: VG ceph_vg0 finished
Sep 30 14:54:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:46.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:46 compute-0 festive_bell[295536]: {}
Sep 30 14:54:46 compute-0 systemd[1]: libpod-9a4a7f619eb6d0b35ffbb8c24bb103cb7818bb8cc99e9b168bb4499062ea8187.scope: Deactivated successfully.
Sep 30 14:54:46 compute-0 podman[295519]: 2025-09-30 14:54:46.954052961 +0000 UTC m=+1.005092690 container died 9a4a7f619eb6d0b35ffbb8c24bb103cb7818bb8cc99e9b168bb4499062ea8187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:54:46 compute-0 systemd[1]: libpod-9a4a7f619eb6d0b35ffbb8c24bb103cb7818bb8cc99e9b168bb4499062ea8187.scope: Consumed 1.406s CPU time.
Sep 30 14:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8cb814e410a06c30a8c12ebe4068decea608104f9f24d8bb47513f6ab14ef53-merged.mount: Deactivated successfully.
Sep 30 14:54:47 compute-0 podman[295519]: 2025-09-30 14:54:47.004339502 +0000 UTC m=+1.055379231 container remove 9a4a7f619eb6d0b35ffbb8c24bb103cb7818bb8cc99e9b168bb4499062ea8187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_bell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:54:47 compute-0 systemd[1]: libpod-conmon-9a4a7f619eb6d0b35ffbb8c24bb103cb7818bb8cc99e9b168bb4499062ea8187.scope: Deactivated successfully.
Sep 30 14:54:47 compute-0 sudo[295412]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:54:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:54:47 compute-0 ceph-mon[74194]: pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:47 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:47 compute-0 nova_compute[261524]: 2025-09-30 14:54:47.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:47 compute-0 sudo[295627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:54:47 compute-0 sudo[295627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:47 compute-0 sudo[295627]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:47.220Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:54:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:47.220Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:54:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:47 compute-0 sudo[295653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:54:47 compute-0 sudo[295653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:54:47 compute-0 sudo[295653]: pam_unix(sudo:session): session closed for user root
Sep 30 14:54:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:47.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:54:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:48.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:48.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:49 compute-0 ceph-mon[74194]: pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:50.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:50 compute-0 nova_compute[261524]: 2025-09-30 14:54:50.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:51 compute-0 ceph-mon[74194]: pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:52 compute-0 nova_compute[261524]: 2025-09-30 14:54:52.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:54:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:52.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:54:53 compute-0 ceph-mon[74194]: pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:54:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:53.725Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:53.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:54] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:54:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:54:54] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:54:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:54.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:55 compute-0 ceph-mon[74194]: pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:55 compute-0 nova_compute[261524]: 2025-09-30 14:54:55.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:56.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:57 compute-0 nova_compute[261524]: 2025-09-30 14:54:57.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:54:57 compute-0 ceph-mon[74194]: pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:57.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:54:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:54:58.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:54:58.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:54:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:54:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:54:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:54:58.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:54:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:54:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:54:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:54:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:54:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:54:59 compute-0 ceph-mon[74194]: pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:54:59
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'vms', 'backups', '.nfs', 'cephfs.cephfs.meta', '.mgr', 'volumes', '.rgw.root', 'default.rgw.meta', 'images', 'cephfs.cephfs.data']
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:54:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:54:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:54:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:55:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:00.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:55:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:55:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:00.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:00 compute-0 nova_compute[261524]: 2025-09-30 14:55:00.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:55:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:55:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:55:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:55:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:55:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:55:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:55:01 compute-0 ceph-mon[74194]: pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:55:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:02.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:55:02 compute-0 nova_compute[261524]: 2025-09-30 14:55:02.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:02.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:03 compute-0 ceph-mon[74194]: pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:03.726Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:04.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:04] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:55:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:04] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:55:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:04.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:05 compute-0 ceph-mon[74194]: pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:05 compute-0 nova_compute[261524]: 2025-09-30 14:55:05.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:06.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:06.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:07 compute-0 nova_compute[261524]: 2025-09-30 14:55:07.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:07.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:07 compute-0 ceph-mon[74194]: pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:07 compute-0 sudo[295698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:55:07 compute-0 sudo[295698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:07 compute-0 sudo[295698]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:07 compute-0 nova_compute[261524]: 2025-09-30 14:55:07.875 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:55:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:08.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:08.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:08.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:09 compute-0 ceph-mon[74194]: pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:10.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:10.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:10 compute-0 nova_compute[261524]: 2025-09-30 14:55:10.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:11 compute-0 ceph-mon[74194]: pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1323500117' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:55:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/1323500117' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.319311) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244111319338, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1067, "num_deletes": 251, "total_data_size": 1885325, "memory_usage": 1917824, "flush_reason": "Manual Compaction"}
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244111326726, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1830270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33705, "largest_seqno": 34771, "table_properties": {"data_size": 1825127, "index_size": 2603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11281, "raw_average_key_size": 19, "raw_value_size": 1814765, "raw_average_value_size": 3206, "num_data_blocks": 113, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759244018, "oldest_key_time": 1759244018, "file_creation_time": 1759244111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7442 microseconds, and 3733 cpu microseconds.
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.326755) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1830270 bytes OK
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.326769) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.328340) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.328353) EVENT_LOG_v1 {"time_micros": 1759244111328349, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.328369) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1880485, prev total WAL file size 1880485, number of live WAL files 2.
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.329046) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1787KB)], [71(14MB)]
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244111329136, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 17292884, "oldest_snapshot_seqno": -1}
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6587 keys, 15203410 bytes, temperature: kUnknown
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244111437095, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 15203410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15158360, "index_size": 27466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 172863, "raw_average_key_size": 26, "raw_value_size": 15038682, "raw_average_value_size": 2283, "num_data_blocks": 1086, "num_entries": 6587, "num_filter_entries": 6587, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759244111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.437308) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 15203410 bytes
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.438208) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.1 rd, 140.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 14.7 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(17.8) write-amplify(8.3) OK, records in: 7105, records dropped: 518 output_compression: NoCompression
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.438223) EVENT_LOG_v1 {"time_micros": 1759244111438215, "job": 40, "event": "compaction_finished", "compaction_time_micros": 108006, "compaction_time_cpu_micros": 49361, "output_level": 6, "num_output_files": 1, "total_output_size": 15203410, "num_input_records": 7105, "num_output_records": 6587, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244111438553, "job": 40, "event": "table_file_deletion", "file_number": 73}
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244111441299, "job": 40, "event": "table_file_deletion", "file_number": 71}
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.328883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.441414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.441423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.441426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.441429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:11 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:11.441432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:55:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:12.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:55:12 compute-0 nova_compute[261524]: 2025-09-30 14:55:12.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:12.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:13 compute-0 ceph-mon[74194]: pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:13.728Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:14.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:14 compute-0 podman[295730]: 2025-09-30 14:55:14.177462607 +0000 UTC m=+0.090754156 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:55:14 compute-0 podman[295733]: 2025-09-30 14:55:14.179348156 +0000 UTC m=+0.077095490 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 14:55:14 compute-0 podman[295731]: 2025-09-30 14:55:14.205070846 +0000 UTC m=+0.113341204 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:55:14 compute-0 podman[295732]: 2025-09-30 14:55:14.218774303 +0000 UTC m=+0.120357807 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Sep 30 14:55:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:55:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:55:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:14] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:55:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:14] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:55:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:14.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:15 compute-0 ceph-mon[74194]: pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:55:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:15 compute-0 nova_compute[261524]: 2025-09-30 14:55:15.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:16.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:16.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:17 compute-0 nova_compute[261524]: 2025-09-30 14:55:17.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:17.224Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:17 compute-0 ceph-mon[74194]: pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:18.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:18.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:18.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:19 compute-0 ceph-mon[74194]: pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:20.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:20.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:20 compute-0 nova_compute[261524]: 2025-09-30 14:55:20.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:21 compute-0 ceph-mon[74194]: pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/302148375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:55:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:22 compute-0 nova_compute[261524]: 2025-09-30 14:55:22.025 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:55:22 compute-0 nova_compute[261524]: 2025-09-30 14:55:22.025 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:55:22 compute-0 nova_compute[261524]: 2025-09-30 14:55:22.025 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:55:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:22.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:22 compute-0 nova_compute[261524]: 2025-09-30 14:55:22.060 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:55:22 compute-0 nova_compute[261524]: 2025-09-30 14:55:22.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1131396890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:55:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3572888752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:55:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:22.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:22 compute-0 nova_compute[261524]: 2025-09-30 14:55:22.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:55:23 compute-0 ceph-mon[74194]: pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2984454416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:55:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:23.729Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:23 compute-0 nova_compute[261524]: 2025-09-30 14:55:23.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:55:23 compute-0 nova_compute[261524]: 2025-09-30 14:55:23.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:55:23 compute-0 nova_compute[261524]: 2025-09-30 14:55:23.954 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:55:23 compute-0 nova_compute[261524]: 2025-09-30 14:55:23.954 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:55:23 compute-0 nova_compute[261524]: 2025-09-30 14:55:23.983 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:55:23 compute-0 nova_compute[261524]: 2025-09-30 14:55:23.983 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:55:23 compute-0 nova_compute[261524]: 2025-09-30 14:55:23.984 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:55:23 compute-0 nova_compute[261524]: 2025-09-30 14:55:23.984 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:55:23 compute-0 nova_compute[261524]: 2025-09-30 14:55:23.984 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:55:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:24.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:55:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2708558743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.436 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:55:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2708558743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.592 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.593 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4491MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.593 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.594 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.680 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.680 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.695 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing inventories for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Sep 30 14:55:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:24] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:55:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:24] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.800 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating ProviderTree inventory for provider 06783cfc-6d32-454d-9501-ebd8adea3735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.800 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.815 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing aggregate associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.837 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing trait associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Sep 30 14:55:24 compute-0 nova_compute[261524]: 2025-09-30 14:55:24.854 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:55:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:24.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:55:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385494547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:55:25 compute-0 nova_compute[261524]: 2025-09-30 14:55:25.350 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:55:25 compute-0 nova_compute[261524]: 2025-09-30 14:55:25.357 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:55:25 compute-0 nova_compute[261524]: 2025-09-30 14:55:25.376 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:55:25 compute-0 nova_compute[261524]: 2025-09-30 14:55:25.377 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:55:25 compute-0 nova_compute[261524]: 2025-09-30 14:55:25.377 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:55:25 compute-0 ceph-mon[74194]: pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1385494547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:55:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:25 compute-0 nova_compute[261524]: 2025-09-30 14:55:25.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:26.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:26 compute-0 nova_compute[261524]: 2025-09-30 14:55:26.371 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:55:26 compute-0 nova_compute[261524]: 2025-09-30 14:55:26.372 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:55:26 compute-0 nova_compute[261524]: 2025-09-30 14:55:26.372 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:55:26 compute-0 ceph-mon[74194]: pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:26.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:26 compute-0 nova_compute[261524]: 2025-09-30 14:55:26.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:55:27 compute-0 nova_compute[261524]: 2025-09-30 14:55:27.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:27.225Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:55:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:27.225Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:55:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:27.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:27 compute-0 sudo[295864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:55:27 compute-0 sudo[295864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:27 compute-0 sudo[295864]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:28.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:28 compute-0 ceph-mon[74194]: pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:28.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:28.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:55:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:55:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:55:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:55:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:55:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:55:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:55:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:55:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:55:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:30.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:30 compute-0 ceph-mon[74194]: pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:30.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:30 compute-0 nova_compute[261524]: 2025-09-30 14:55:30.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:32.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:32 compute-0 nova_compute[261524]: 2025-09-30 14:55:32.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.562924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244132562964, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 437, "num_deletes": 252, "total_data_size": 412622, "memory_usage": 421456, "flush_reason": "Manual Compaction"}
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244132566788, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 322403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34773, "largest_seqno": 35208, "table_properties": {"data_size": 319985, "index_size": 518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6603, "raw_average_key_size": 20, "raw_value_size": 315070, "raw_average_value_size": 972, "num_data_blocks": 23, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759244111, "oldest_key_time": 1759244111, "file_creation_time": 1759244132, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 3898 microseconds, and 1573 cpu microseconds.
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.566823) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 322403 bytes OK
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.566840) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.569443) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.569456) EVENT_LOG_v1 {"time_micros": 1759244132569452, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.569470) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 409971, prev total WAL file size 409971, number of live WAL files 2.
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.569865) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(314KB)], [74(14MB)]
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244132569906, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15525813, "oldest_snapshot_seqno": -1}
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6403 keys, 11474366 bytes, temperature: kUnknown
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244132653834, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11474366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11435149, "index_size": 22083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 169242, "raw_average_key_size": 26, "raw_value_size": 11323336, "raw_average_value_size": 1768, "num_data_blocks": 861, "num_entries": 6403, "num_filter_entries": 6403, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759244132, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.655208) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11474366 bytes
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.656813) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.7 rd, 136.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.5 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(83.7) write-amplify(35.6) OK, records in: 6911, records dropped: 508 output_compression: NoCompression
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.656859) EVENT_LOG_v1 {"time_micros": 1759244132656841, "job": 42, "event": "compaction_finished", "compaction_time_micros": 84037, "compaction_time_cpu_micros": 31137, "output_level": 6, "num_output_files": 1, "total_output_size": 11474366, "num_input_records": 6911, "num_output_records": 6403, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244132657150, "job": 42, "event": "table_file_deletion", "file_number": 76}
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244132662837, "job": 42, "event": "table_file_deletion", "file_number": 74}
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.569803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.662918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.662924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.662927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.662930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:32 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:55:32.662933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:55:32 compute-0 ceph-mon[74194]: pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:55:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
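Each radosgw request above ends with a beast access-log line in a fixed shape: client, user, bracketed timestamp, quoted request line, status, byte count, and a latency field. A small sketch, assuming a plain-text export named journal.txt, that filters those lines for slow or non-2xx requests (the 0.5 s threshold is an arbitrary illustration):

import re

# beast: 0x...: <client> - <user> [<ts>] "<request>" <status> <bytes> - - - latency=<sec>s
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s'
)

def slow_or_failed_requests(path, max_latency=0.5):
    """Print beast requests that were slow or did not return a 2xx status."""
    with open(path, encoding='utf-8', errors='replace') as fh:
        for line in fh:
            m = BEAST_RE.search(line)
            if not m:
                continue
            status = int(m.group('status'))
            latency = float(m.group('latency'))
            if latency > max_latency or not 200 <= status < 300:
                print(f"{m.group('ts')} {m.group('client')} "
                      f"{m.group('request')!r} -> {status} in {latency:.3f}s")

if __name__ == '__main__':
    slow_or_failed_requests('journal.txt')

In this window every request is an anonymous HEAD / health probe answered with 200 in well under a millisecond, so the filter prints nothing.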
Sep 30 14:55:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:33.730Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:34.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:34] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:55:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:34] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:55:34 compute-0 ceph-mon[74194]: pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:34.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:55:35 compute-0 nova_compute[261524]: 2025-09-30 14:55:35.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:36.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:36 compute-0 ceph-mon[74194]: pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
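Both ceph-mgr and ceph-mon echo each pgmap version, so every usage sample appears twice in this log. A sketch, assuming the same journal.txt export, that keeps one copy per version to watch utilization drift over time:

import re

# pgmap vN: <pgs> pgs: <states>; <data> data, <used> used, <avail> / <total> avail; ...
PGMAP_RE = re.compile(
    r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: [^;]*; '
    r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
    r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail'
)

def pgmap_samples(path):
    """Yield one parsed pgmap sample per version (mgr and mon both log each one)."""
    seen = set()
    with open(path, encoding='utf-8', errors='replace') as fh:
        for line in fh:
            m = PGMAP_RE.search(line)
            if not m or m.group('ver') in seen:
                continue
            seen.add(m.group('ver'))
            yield m.groupdict()

if __name__ == '__main__':
    for s in pgmap_samples('journal.txt'):
        print('v' + s['ver'], s['pgs'], 'pgs,', s['used'], 'used of', s['total'])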
Sep 30 14:55:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:36.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:37 compute-0 nova_compute[261524]: 2025-09-30 14:55:37.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:37.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
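The _set_new_cache_sizes line above reports the monitor's cache split in raw bytes. A short worked conversion (values copied from that line) shows the same numbers in MiB:

# Values copied from the _set_new_cache_sizes line above, in bytes.
sizes = {'cache_size': 1020054731, 'inc_alloc': 348127232,
         'full_alloc': 348127232, 'kv_alloc': 318767104}
for name, value in sizes.items():
    print(f'{name}: {value / 2**20:.1f} MiB')
# cache_size comes out at ~972.8 MiB; inc_alloc and full_alloc are exactly
# 332.0 MiB, and kv_alloc exactly 304.0 MiB.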
Sep 30 14:55:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 12 op/s
Sep 30 14:55:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:38.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:55:38.272 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:55:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:55:38.273 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:55:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:55:38.273 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:55:38 compute-0 ceph-mon[74194]: pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 12 op/s
Sep 30 14:55:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:38.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:38.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Sep 30 14:55:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:40.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:40 compute-0 ceph-mon[74194]: pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Sep 30 14:55:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:40.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:41 compute-0 nova_compute[261524]: 2025-09-30 14:55:41.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Sep 30 14:55:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:42.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:42 compute-0 nova_compute[261524]: 2025-09-30 14:55:42.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:42 compute-0 ceph-mon[74194]: pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Sep 30 14:55:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:42.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Sep 30 14:55:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:43.731Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
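The four ganesha lines above form one grace cycle, and the same cycle repeats every few seconds in this capture: the server announces a 90-second grace period, reloads client info from the backend, checks reclaim with a client-ID count of zero, and logs rados_cluster_grace_enforcing ret=-45. A sketch, assuming the journal.txt export, that lists the grace-start events and the interval between them so the re-entry cadence is visible at a glance (ganesha's own timestamp inside the message is used, not the syslog prefix):

import re
from datetime import datetime

# ganesha lines carry their own timestamp:
#   30/09/2025 14:55:43 : epoch ... nfs_start_grace ... NFS Server Now IN GRACE, duration 90
GRACE_RE = re.compile(
    r'(?P<ts>\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) .*nfs_start_grace'
    r'.*NFS Server Now IN GRACE, duration (?P<dur>\d+)'
)

def grace_entries(path):
    """Yield (timestamp, grace duration in seconds) for each grace-start event."""
    with open(path, encoding='utf-8', errors='replace') as fh:
        for line in fh:
            m = GRACE_RE.search(line)
            if m:
                yield datetime.strptime(m.group('ts'), '%d/%m/%Y %H:%M:%S'), int(m.group('dur'))

if __name__ == '__main__':
    prev = None
    for ts, dur in grace_entries('journal.txt'):
        gap = f' (re-entered after {(ts - prev).total_seconds():.0f}s)' if prev else ''
        print(f'{ts} grace {dur}s{gap}')
        prev = ts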
Sep 30 14:55:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:44.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:55:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:55:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:44] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:55:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:44] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:55:44 compute-0 ceph-mon[74194]: pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Sep 30 14:55:44 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:55:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:44.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:45 compute-0 podman[295906]: 2025-09-30 14:55:45.15327894 +0000 UTC m=+0.067794708 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:55:45 compute-0 podman[295907]: 2025-09-30 14:55:45.184548005 +0000 UTC m=+0.095490510 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Sep 30 14:55:45 compute-0 podman[295908]: 2025-09-30 14:55:45.192151793 +0000 UTC m=+0.090014077 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:55:45 compute-0 podman[295914]: 2025-09-30 14:55:45.213199181 +0000 UTC m=+0.102811620 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
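The four podman lines above are periodic healthcheck events; besides the large config dump, each one carries name=, health_status= and health_failing_streak= in its attribute list. A sketch, assuming the journal.txt export, that keeps the most recent health state per container:

import re

# ... container health_status <id> (image=..., name=iscsid, health_status=healthy, health_failing_streak=0, ...)
HEALTH_RE = re.compile(
    r'container health_status \S+ \(.*?name=(?P<name>[^,)]+)'
    r'.*?health_status=(?P<status>[^,)]+).*?health_failing_streak=(?P<streak>\d+)'
)

def latest_health(path):
    """Return {container name: (status, failing streak)} from podman health events."""
    latest = {}
    with open(path, encoding='utf-8', errors='replace') as fh:
        for line in fh:
            m = HEALTH_RE.search(line)
            if m:
                latest[m.group('name')] = (m.group('status'), int(m.group('streak')))
    return latest

if __name__ == '__main__':
    for name, (status, streak) in sorted(latest_health('journal.txt').items()):
        print(f'{name}: {status} (failing streak {streak})')

For this window it would report iscsid, multipathd, ovn_controller and ovn_metadata_agent all healthy with a failing streak of 0.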
Sep 30 14:55:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Sep 30 14:55:46 compute-0 nova_compute[261524]: 2025-09-30 14:55:46.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:46.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:46 compute-0 ceph-mon[74194]: pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Sep 30 14:55:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:55:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:46.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:55:47 compute-0 nova_compute[261524]: 2025-09-30 14:55:47.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:47.227Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:55:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:47.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
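Alertmanager keeps logging "Notify for alerts failed" for the ceph-dashboard receiver; the unreachable webhook URLs are embedded, with escaped quotes exactly as shown above, in the err= field. A sketch, assuming the journal.txt export preserves those escaped quotes, that tallies failures per endpoint so the dead receivers stand out:

import re
from collections import Counter

FAIL_MARKER = 'msg="Notify for alerts failed"'
# The err= payload quotes each target as: Post \"http://...\"
TARGET_RE = re.compile(r'Post \\"(?P<url>http[^"\\]+)\\"')

def failed_notify_targets(path):
    """Count how often each webhook endpoint appears in failed-notify errors."""
    counts = Counter()
    with open(path, encoding='utf-8', errors='replace') as fh:
        for line in fh:
            if FAIL_MARKER in line:
                counts.update(m.group('url') for m in TARGET_RE.finditer(line))
    return counts

if __name__ == '__main__':
    for url, n in failed_notify_targets('journal.txt').most_common():
        print(n, url)

In this capture both the compute-1 and compute-2 prometheus_receiver endpoints on port 8443 fail on every dispatch, either with a context deadline exceeded or a dial timeout.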
Sep 30 14:55:47 compute-0 sudo[295991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:55:47 compute-0 sudo[295991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:47 compute-0 sudo[295991]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:47 compute-0 sudo[296017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:55:47 compute-0 sudo[296017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Sep 30 14:55:48 compute-0 sudo[296072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:55:48 compute-0 sudo[296017]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:48 compute-0 sudo[296072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:48 compute-0 sudo[296072]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:48.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:55:48 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:55:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:55:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:55:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Sep 30 14:55:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:55:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:55:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:55:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:55:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:55:48 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:55:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:55:48 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:55:48 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:55:48 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
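The burst of handle_command / audit lines above is the mgr's cephadm module querying the monitor; each audited command is embedded as a JSON array in cmd=[...]. A sketch, assuming the journal.txt export, that tallies the command prefixes to show what the mgr keeps asking for:

import json
import re
from collections import Counter

# Audit lines end with: cmd=[{"prefix": "...", ...}]: dispatch
CMD_RE = re.compile(r'cmd=(\[\{.*?\}\]): dispatch')

def mon_command_prefixes(path):
    """Count mon_command prefixes found in audit log lines."""
    counts = Counter()
    with open(path, encoding='utf-8', errors='replace') as fh:
        for line in fh:
            m = CMD_RE.search(line)
            if not m:
                continue
            for cmd in json.loads(m.group(1)):
                counts[cmd.get('prefix', '?')] += 1
    return counts

if __name__ == '__main__':
    for prefix, n in mon_command_prefixes('journal.txt').most_common():
        print(n, prefix)

Note that a few audit lines above (the config-key set ones) carry no cmd=[...] payload at all; the pattern simply skips those.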
Sep 30 14:55:48 compute-0 sudo[296098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:55:48 compute-0 sudo[296098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:48 compute-0 sudo[296098]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:48 compute-0 sudo[296123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:55:48 compute-0 sudo[296123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
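Everything cephadm does on this host runs through sudo as ceph-admin, and each invocation is recorded as a COMMAND= line (which python3, gather-facts, the ceph-volume lvm batch above, and the lvm list that follows). A sketch, assuming the journal.txt export keeps the short syslog-style prefix shown here, that lists those commands in order:

import re

# sudo[PID]: ceph-admin : PWD=... ; USER=root ; COMMAND=<command line>
SUDO_RE = re.compile(
    r'^(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}) \S+ sudo\[\d+\]: '
    r'ceph-admin : .*; USER=root ; COMMAND=(?P<cmd>.*)$'
)

def ceph_admin_commands(path):
    """Yield (syslog timestamp, command) for every sudo command run as ceph-admin."""
    with open(path, encoding='utf-8', errors='replace') as fh:
        for line in fh:
            m = SUDO_RE.match(line)
            if m:
                yield m.group('ts'), m.group('cmd')

if __name__ == '__main__':
    for ts, cmd in ceph_admin_commands('journal.txt'):
        # Trim the long cephadm wrapper path so the subcommand stays readable.
        print(ts, cmd if len(cmd) <= 120 else cmd[:117] + '...')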
Sep 30 14:55:48 compute-0 podman[296188]: 2025-09-30 14:55:48.745525559 +0000 UTC m=+0.047943270 container create 1b0a8ca5a5afda8b9c4e02f31aa398d0934c9bd6ae832242060006ca5988f647 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:55:48 compute-0 systemd[1]: Started libpod-conmon-1b0a8ca5a5afda8b9c4e02f31aa398d0934c9bd6ae832242060006ca5988f647.scope.
Sep 30 14:55:48 compute-0 podman[296188]: 2025-09-30 14:55:48.723483055 +0000 UTC m=+0.025900786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:55:48 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:55:48 compute-0 podman[296188]: 2025-09-30 14:55:48.846561122 +0000 UTC m=+0.148978873 container init 1b0a8ca5a5afda8b9c4e02f31aa398d0934c9bd6ae832242060006ca5988f647 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_jackson, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 14:55:48 compute-0 podman[296188]: 2025-09-30 14:55:48.855534796 +0000 UTC m=+0.157952507 container start 1b0a8ca5a5afda8b9c4e02f31aa398d0934c9bd6ae832242060006ca5988f647 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_jackson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 14:55:48 compute-0 podman[296188]: 2025-09-30 14:55:48.858933685 +0000 UTC m=+0.161351406 container attach 1b0a8ca5a5afda8b9c4e02f31aa398d0934c9bd6ae832242060006ca5988f647 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:55:48 compute-0 confident_jackson[296205]: 167 167
Sep 30 14:55:48 compute-0 systemd[1]: libpod-1b0a8ca5a5afda8b9c4e02f31aa398d0934c9bd6ae832242060006ca5988f647.scope: Deactivated successfully.
Sep 30 14:55:48 compute-0 conmon[296205]: conmon 1b0a8ca5a5afda8b9c4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b0a8ca5a5afda8b9c4e02f31aa398d0934c9bd6ae832242060006ca5988f647.scope/container/memory.events
Sep 30 14:55:48 compute-0 podman[296188]: 2025-09-30 14:55:48.863532814 +0000 UTC m=+0.165950545 container died 1b0a8ca5a5afda8b9c4e02f31aa398d0934c9bd6ae832242060006ca5988f647 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:55:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:48.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-475556fc805b8e98d0dc1ca9ade5cace959296f0b5cbd43baf30b7b005c4e6de-merged.mount: Deactivated successfully.
Sep 30 14:55:48 compute-0 podman[296188]: 2025-09-30 14:55:48.907925801 +0000 UTC m=+0.210343512 container remove 1b0a8ca5a5afda8b9c4e02f31aa398d0934c9bd6ae832242060006ca5988f647 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 14:55:48 compute-0 systemd[1]: libpod-conmon-1b0a8ca5a5afda8b9c4e02f31aa398d0934c9bd6ae832242060006ca5988f647.scope: Deactivated successfully.
Sep 30 14:55:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:48.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:48 compute-0 ceph-mon[74194]: pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Sep 30 14:55:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:55:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:55:48 compute-0 ceph-mon[74194]: pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Sep 30 14:55:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:55:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:55:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:55:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:55:48 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:55:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:49 compute-0 podman[296232]: 2025-09-30 14:55:49.076461752 +0000 UTC m=+0.053436983 container create 60cc64b840c8121ff595453dc04c6ba9121972c8f8e7a8f6840df93f739fedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:55:49 compute-0 systemd[1]: Started libpod-conmon-60cc64b840c8121ff595453dc04c6ba9121972c8f8e7a8f6840df93f739fedd7.scope.
Sep 30 14:55:49 compute-0 podman[296232]: 2025-09-30 14:55:49.049412058 +0000 UTC m=+0.026387379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:55:49 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c7cd799cc9a9d1c478b4cbbbab51e9d05da966c7694c4a981b52aff5c535be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c7cd799cc9a9d1c478b4cbbbab51e9d05da966c7694c4a981b52aff5c535be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c7cd799cc9a9d1c478b4cbbbab51e9d05da966c7694c4a981b52aff5c535be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c7cd799cc9a9d1c478b4cbbbab51e9d05da966c7694c4a981b52aff5c535be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c7cd799cc9a9d1c478b4cbbbab51e9d05da966c7694c4a981b52aff5c535be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:49 compute-0 podman[296232]: 2025-09-30 14:55:49.174550358 +0000 UTC m=+0.151525669 container init 60cc64b840c8121ff595453dc04c6ba9121972c8f8e7a8f6840df93f739fedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 14:55:49 compute-0 podman[296232]: 2025-09-30 14:55:49.192158197 +0000 UTC m=+0.169133468 container start 60cc64b840c8121ff595453dc04c6ba9121972c8f8e7a8f6840df93f739fedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keldysh, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Sep 30 14:55:49 compute-0 podman[296232]: 2025-09-30 14:55:49.196355716 +0000 UTC m=+0.173331037 container attach 60cc64b840c8121ff595453dc04c6ba9121972c8f8e7a8f6840df93f739fedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:55:49 compute-0 friendly_keldysh[296248]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:55:49 compute-0 friendly_keldysh[296248]: --> All data devices are unavailable
Sep 30 14:55:49 compute-0 systemd[1]: libpod-60cc64b840c8121ff595453dc04c6ba9121972c8f8e7a8f6840df93f739fedd7.scope: Deactivated successfully.
Sep 30 14:55:49 compute-0 podman[296232]: 2025-09-30 14:55:49.580624059 +0000 UTC m=+0.557599320 container died 60cc64b840c8121ff595453dc04c6ba9121972c8f8e7a8f6840df93f739fedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:55:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-64c7cd799cc9a9d1c478b4cbbbab51e9d05da966c7694c4a981b52aff5c535be-merged.mount: Deactivated successfully.
Sep 30 14:55:49 compute-0 podman[296232]: 2025-09-30 14:55:49.633963539 +0000 UTC m=+0.610938810 container remove 60cc64b840c8121ff595453dc04c6ba9121972c8f8e7a8f6840df93f739fedd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:55:49 compute-0 systemd[1]: libpod-conmon-60cc64b840c8121ff595453dc04c6ba9121972c8f8e7a8f6840df93f739fedd7.scope: Deactivated successfully.
Sep 30 14:55:49 compute-0 sudo[296123]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:49 compute-0 sudo[296279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:55:49 compute-0 sudo[296279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:49 compute-0 sudo[296279]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:49 compute-0 sudo[296304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:55:49 compute-0 sudo[296304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:50.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Sep 30 14:55:50 compute-0 podman[296371]: 2025-09-30 14:55:50.337236993 +0000 UTC m=+0.048205377 container create 986ece20d713864d557429f3a2ed5e7bc664d7ff6aef11244892eeb2227d30fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 14:55:50 compute-0 systemd[1]: Started libpod-conmon-986ece20d713864d557429f3a2ed5e7bc664d7ff6aef11244892eeb2227d30fc.scope.
Sep 30 14:55:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:55:50 compute-0 podman[296371]: 2025-09-30 14:55:50.41237132 +0000 UTC m=+0.123339744 container init 986ece20d713864d557429f3a2ed5e7bc664d7ff6aef11244892eeb2227d30fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Sep 30 14:55:50 compute-0 podman[296371]: 2025-09-30 14:55:50.320003524 +0000 UTC m=+0.030971908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:55:50 compute-0 podman[296371]: 2025-09-30 14:55:50.420566254 +0000 UTC m=+0.131534638 container start 986ece20d713864d557429f3a2ed5e7bc664d7ff6aef11244892eeb2227d30fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:55:50 compute-0 podman[296371]: 2025-09-30 14:55:50.423439229 +0000 UTC m=+0.134407603 container attach 986ece20d713864d557429f3a2ed5e7bc664d7ff6aef11244892eeb2227d30fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:55:50 compute-0 beautiful_shamir[296387]: 167 167
Sep 30 14:55:50 compute-0 systemd[1]: libpod-986ece20d713864d557429f3a2ed5e7bc664d7ff6aef11244892eeb2227d30fc.scope: Deactivated successfully.
Sep 30 14:55:50 compute-0 podman[296371]: 2025-09-30 14:55:50.425455741 +0000 UTC m=+0.136424155 container died 986ece20d713864d557429f3a2ed5e7bc664d7ff6aef11244892eeb2227d30fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Sep 30 14:55:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e11af8ba25f27063f1fd840b2a9d979a55bb23b52bd14ff366c66afefc88a7a6-merged.mount: Deactivated successfully.
Sep 30 14:55:50 compute-0 podman[296371]: 2025-09-30 14:55:50.469849348 +0000 UTC m=+0.180817722 container remove 986ece20d713864d557429f3a2ed5e7bc664d7ff6aef11244892eeb2227d30fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shamir, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:55:50 compute-0 systemd[1]: libpod-conmon-986ece20d713864d557429f3a2ed5e7bc664d7ff6aef11244892eeb2227d30fc.scope: Deactivated successfully.
Sep 30 14:55:50 compute-0 podman[296411]: 2025-09-30 14:55:50.694696757 +0000 UTC m=+0.060917619 container create fd1c36fd85f686a601a7664545459a1e98d46c6bbb2fc977e929880989a6fc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:55:50 compute-0 systemd[1]: Started libpod-conmon-fd1c36fd85f686a601a7664545459a1e98d46c6bbb2fc977e929880989a6fc55.scope.
Sep 30 14:55:50 compute-0 podman[296411]: 2025-09-30 14:55:50.672414756 +0000 UTC m=+0.038635618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:55:50 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd84c7949282ec28b012ca075ec65f8154cbb6445c5204b1b30e717e9ce59d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd84c7949282ec28b012ca075ec65f8154cbb6445c5204b1b30e717e9ce59d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd84c7949282ec28b012ca075ec65f8154cbb6445c5204b1b30e717e9ce59d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd84c7949282ec28b012ca075ec65f8154cbb6445c5204b1b30e717e9ce59d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:50 compute-0 podman[296411]: 2025-09-30 14:55:50.806090859 +0000 UTC m=+0.172311721 container init fd1c36fd85f686a601a7664545459a1e98d46c6bbb2fc977e929880989a6fc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_franklin, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:55:50 compute-0 podman[296411]: 2025-09-30 14:55:50.81381925 +0000 UTC m=+0.180040082 container start fd1c36fd85f686a601a7664545459a1e98d46c6bbb2fc977e929880989a6fc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 14:55:50 compute-0 podman[296411]: 2025-09-30 14:55:50.817054315 +0000 UTC m=+0.183275227 container attach fd1c36fd85f686a601a7664545459a1e98d46c6bbb2fc977e929880989a6fc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Sep 30 14:55:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:50.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
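The three radosgw lines above are one complete beast front-end access record for an anonymous HEAD / health probe (start, completion, access-log line). As a minimal sketch for pulling the client IP, HTTP status and latency out of such a line, with the field layout inferred from the entries in this log rather than from any documented radosgw format:

    import re

    # Pattern is an assumption based on the beast lines visible above.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f91011da5d0: 192.168.122.102 - anonymous '
            '[30/Sep/2025:14:55:50.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('ip'), m.group('status'), m.group('latency'))

Run against the sample line it prints "192.168.122.102 200 0.000000000"; the same pattern applies to the repeated probes from 192.168.122.100 and 192.168.122.102 throughout this section.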
Sep 30 14:55:51 compute-0 nova_compute[261524]: 2025-09-30 14:55:51.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]: {
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:     "0": [
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:         {
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "devices": [
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "/dev/loop3"
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             ],
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "lv_name": "ceph_lv0",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "lv_size": "21470642176",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "name": "ceph_lv0",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "tags": {
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.cluster_name": "ceph",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.crush_device_class": "",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.encrypted": "0",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.osd_id": "0",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.type": "block",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.vdo": "0",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:                 "ceph.with_tpm": "0"
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             },
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "type": "block",
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:             "vg_name": "ceph_vg0"
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:         }
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]:     ]
Sep 30 14:55:51 compute-0 compassionate_franklin[296427]: }
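The JSON block printed by the short-lived compassionate_franklin container above is the output of the ceph-volume "lvm list --format json" call that cephadm dispatched, keyed by OSD id. A minimal sketch of reading those per-OSD records, assuming the object has been captured into a string (abbreviated here; the real object carries the full tag set shown above):

    import json

    lvm_list_json = '''
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "type": "block",
          "tags": {"ceph.osd_id": "0",
                   "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2"}
        }
      ]
    }
    '''

    for osd_id, lvs in json.loads(lvm_list_json).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['type']} device {lv['lv_path']} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']}, "
                  f"backed by {', '.join(lv['devices'])})")

This prints one line per logical volume, e.g. "osd.0: block device /dev/ceph_vg0/ceph_lv0 ...", matching the single OSD on /dev/loop3 recorded above.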
Sep 30 14:55:51 compute-0 systemd[1]: libpod-fd1c36fd85f686a601a7664545459a1e98d46c6bbb2fc977e929880989a6fc55.scope: Deactivated successfully.
Sep 30 14:55:51 compute-0 podman[296411]: 2025-09-30 14:55:51.149138658 +0000 UTC m=+0.515359490 container died fd1c36fd85f686a601a7664545459a1e98d46c6bbb2fc977e929880989a6fc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Sep 30 14:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fd84c7949282ec28b012ca075ec65f8154cbb6445c5204b1b30e717e9ce59d0-merged.mount: Deactivated successfully.
Sep 30 14:55:51 compute-0 podman[296411]: 2025-09-30 14:55:51.226720999 +0000 UTC m=+0.592941851 container remove fd1c36fd85f686a601a7664545459a1e98d46c6bbb2fc977e929880989a6fc55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_franklin, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:55:51 compute-0 ceph-mon[74194]: pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Sep 30 14:55:51 compute-0 systemd[1]: libpod-conmon-fd1c36fd85f686a601a7664545459a1e98d46c6bbb2fc977e929880989a6fc55.scope: Deactivated successfully.
Sep 30 14:55:51 compute-0 sudo[296304]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:51 compute-0 sudo[296448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:55:51 compute-0 sudo[296448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:51 compute-0 sudo[296448]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:51 compute-0 sudo[296473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:55:51 compute-0 sudo[296473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:51 compute-0 podman[296537]: 2025-09-30 14:55:51.861317525 +0000 UTC m=+0.038293919 container create 01d8bc6dc66fb68c5215ba8298629889d700bea11875c1d364c83bf96eb2e7f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_tharp, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:55:51 compute-0 systemd[1]: Started libpod-conmon-01d8bc6dc66fb68c5215ba8298629889d700bea11875c1d364c83bf96eb2e7f2.scope.
Sep 30 14:55:51 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:55:51 compute-0 podman[296537]: 2025-09-30 14:55:51.929143922 +0000 UTC m=+0.106120316 container init 01d8bc6dc66fb68c5215ba8298629889d700bea11875c1d364c83bf96eb2e7f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_tharp, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:55:51 compute-0 podman[296537]: 2025-09-30 14:55:51.937801918 +0000 UTC m=+0.114778272 container start 01d8bc6dc66fb68c5215ba8298629889d700bea11875c1d364c83bf96eb2e7f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:55:51 compute-0 podman[296537]: 2025-09-30 14:55:51.844371753 +0000 UTC m=+0.021348107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:55:51 compute-0 affectionate_tharp[296553]: 167 167
Sep 30 14:55:51 compute-0 podman[296537]: 2025-09-30 14:55:51.940932099 +0000 UTC m=+0.117908483 container attach 01d8bc6dc66fb68c5215ba8298629889d700bea11875c1d364c83bf96eb2e7f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_tharp, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:55:51 compute-0 systemd[1]: libpod-01d8bc6dc66fb68c5215ba8298629889d700bea11875c1d364c83bf96eb2e7f2.scope: Deactivated successfully.
Sep 30 14:55:51 compute-0 podman[296537]: 2025-09-30 14:55:51.942304785 +0000 UTC m=+0.119281159 container died 01d8bc6dc66fb68c5215ba8298629889d700bea11875c1d364c83bf96eb2e7f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_tharp, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Sep 30 14:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-599a0748f68eea4b40701b63412182f6fce33eb889cf763f54a78b44f18ea15f-merged.mount: Deactivated successfully.
Sep 30 14:55:51 compute-0 podman[296537]: 2025-09-30 14:55:51.979140255 +0000 UTC m=+0.156116609 container remove 01d8bc6dc66fb68c5215ba8298629889d700bea11875c1d364c83bf96eb2e7f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:55:51 compute-0 systemd[1]: libpod-conmon-01d8bc6dc66fb68c5215ba8298629889d700bea11875c1d364c83bf96eb2e7f2.scope: Deactivated successfully.
Sep 30 14:55:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:52.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:52 compute-0 podman[296578]: 2025-09-30 14:55:52.148285552 +0000 UTC m=+0.041301357 container create 1a332d70062ae6b423d4fe749b4cf04986686e130a628ee793b478019902d749 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:55:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:55:52 compute-0 nova_compute[261524]: 2025-09-30 14:55:52.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:52 compute-0 systemd[1]: Started libpod-conmon-1a332d70062ae6b423d4fe749b4cf04986686e130a628ee793b478019902d749.scope.
Sep 30 14:55:52 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:55:52 compute-0 podman[296578]: 2025-09-30 14:55:52.13208891 +0000 UTC m=+0.025104735 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd146597f2b38e543796161a8ae3e4f42869a6725823783cbb86c193adf74324/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd146597f2b38e543796161a8ae3e4f42869a6725823783cbb86c193adf74324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd146597f2b38e543796161a8ae3e4f42869a6725823783cbb86c193adf74324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd146597f2b38e543796161a8ae3e4f42869a6725823783cbb86c193adf74324/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:55:52 compute-0 podman[296578]: 2025-09-30 14:55:52.253599146 +0000 UTC m=+0.146614971 container init 1a332d70062ae6b423d4fe749b4cf04986686e130a628ee793b478019902d749 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Sep 30 14:55:52 compute-0 podman[296578]: 2025-09-30 14:55:52.260217858 +0000 UTC m=+0.153233663 container start 1a332d70062ae6b423d4fe749b4cf04986686e130a628ee793b478019902d749 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:55:52 compute-0 podman[296578]: 2025-09-30 14:55:52.264223413 +0000 UTC m=+0.157239228 container attach 1a332d70062ae6b423d4fe749b4cf04986686e130a628ee793b478019902d749 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:55:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:52 compute-0 lvm[296668]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:55:52 compute-0 lvm[296668]: VG ceph_vg0 finished
Sep 30 14:55:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:52.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:52 compute-0 romantic_maxwell[296594]: {}
Sep 30 14:55:52 compute-0 systemd[1]: libpod-1a332d70062ae6b423d4fe749b4cf04986686e130a628ee793b478019902d749.scope: Deactivated successfully.
Sep 30 14:55:52 compute-0 systemd[1]: libpod-1a332d70062ae6b423d4fe749b4cf04986686e130a628ee793b478019902d749.scope: Consumed 1.141s CPU time.
Sep 30 14:55:52 compute-0 podman[296578]: 2025-09-30 14:55:52.997357646 +0000 UTC m=+0.890373441 container died 1a332d70062ae6b423d4fe749b4cf04986686e130a628ee793b478019902d749 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Sep 30 14:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd146597f2b38e543796161a8ae3e4f42869a6725823783cbb86c193adf74324-merged.mount: Deactivated successfully.
Sep 30 14:55:53 compute-0 podman[296578]: 2025-09-30 14:55:53.050735886 +0000 UTC m=+0.943751701 container remove 1a332d70062ae6b423d4fe749b4cf04986686e130a628ee793b478019902d749 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 14:55:53 compute-0 systemd[1]: libpod-conmon-1a332d70062ae6b423d4fe749b4cf04986686e130a628ee793b478019902d749.scope: Deactivated successfully.
Sep 30 14:55:53 compute-0 sudo[296473]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:55:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:55:53 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:55:53 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:55:53 compute-0 sudo[296683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:55:53 compute-0 sudo[296683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:55:53 compute-0 sudo[296683]: pam_unix(sudo:session): session closed for user root
Sep 30 14:55:53 compute-0 ceph-mon[74194]: pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:55:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:55:53 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:55:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:53.731Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:54.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:55:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:54] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:55:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:55:54] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:55:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:54.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:55 compute-0 ceph-mon[74194]: pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:55:56 compute-0 nova_compute[261524]: 2025-09-30 14:55:56.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:56.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:55:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:56.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:57 compute-0 nova_compute[261524]: 2025-09-30 14:55:57.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:55:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:57.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:57 compute-0 ceph-mon[74194]: pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:55:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:55:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:55:58.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:55:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:55:58.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:55:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:55:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:55:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:55:58.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:55:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:55:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:55:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:55:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:55:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:55:59 compute-0 ceph-mon[74194]: pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 586 B/s rd, 0 op/s
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:55:59
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['default.rgw.control', '.nfs', '.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'images', '.rgw.root']
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:55:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:55:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:55:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:56:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:00.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
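The pg_autoscaler lines above follow a simple proportion: each pool's "pg target" is its logged share of raw capacity times its bias times a cluster-wide PG budget. From the numbers in this log that budget works out to 300 (consistent with 3 OSDs at 100 target PGs per OSD, though that split is an inference, not something the log states). A quick arithmetic check in Python:

    # Reproduce the 'pg target' figures above from the logged capacity ratios
    # and biases. PG_BUDGET = 300 is inferred from the log itself, e.g.
    # 7.185749983720779e-06 * 300 = 0.0021557249951162337 for pool '.mgr'.
    PG_BUDGET = 300

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }

    for name, (capacity_ratio, bias) in pools.items():
        print(f"{name}: pg target ~ {capacity_ratio * bias * PG_BUDGET:.16g}")

Each computed value matches the corresponding "pg target" in the log (0.0021557..., 0.19975..., 0.00061047..., 0.00064862...); the targets are then quantized, which is why every pool stays at its current pg_num here.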
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:56:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:00.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:56:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:56:01 compute-0 nova_compute[261524]: 2025-09-30 14:56:01.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:56:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:56:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:56:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:56:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:56:01 compute-0 ceph-mon[74194]: pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:02.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:02 compute-0 nova_compute[261524]: 2025-09-30 14:56:02.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:02.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:03 compute-0 ceph-mon[74194]: pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:03.732Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:04.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:04] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:56:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:04] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:56:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:04.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:05 compute-0 ceph-mon[74194]: pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:06 compute-0 nova_compute[261524]: 2025-09-30 14:56:06.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:06.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:06.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:07 compute-0 nova_compute[261524]: 2025-09-30 14:56:07.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:07.229Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:56:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:07.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:07 compute-0 ceph-mon[74194]: pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:08.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:08 compute-0 sudo[296724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:56:08 compute-0 sudo[296724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:08 compute-0 sudo[296724]: pam_unix(sudo:session): session closed for user root
Sep 30 14:56:08 compute-0 ceph-mon[74194]: pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:08.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:56:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:08.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:08.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:10.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:10.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:11 compute-0 nova_compute[261524]: 2025-09-30 14:56:11.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:11 compute-0 ceph-mon[74194]: pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2719705085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:56:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2719705085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:56:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:12.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:12 compute-0 nova_compute[261524]: 2025-09-30 14:56:12.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:12.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:13 compute-0 ceph-mon[74194]: pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:13.733Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:13 compute-0 unix_chkpwd[296757]: password check failed for user (root)
Sep 30 14:56:13 compute-0 sshd-session[296753]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:56:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:14.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:56:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:56:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:14] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Sep 30 14:56:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:14] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Sep 30 14:56:14 compute-0 sshd-session[296758]: banner exchange: Connection from 195.184.76.214 port 52859: invalid format
Sep 30 14:56:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:14.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:15 compute-0 ceph-mon[74194]: pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:56:15 compute-0 sshd-session[296753]: Failed password for root from 80.94.93.233 port 32300 ssh2
Sep 30 14:56:16 compute-0 nova_compute[261524]: 2025-09-30 14:56:16.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:16 compute-0 unix_chkpwd[296763]: password check failed for user (root)
Sep 30 14:56:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:16.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:16 compute-0 podman[296765]: 2025-09-30 14:56:16.139843999 +0000 UTC m=+0.060792115 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Sep 30 14:56:16 compute-0 podman[296762]: 2025-09-30 14:56:16.162980162 +0000 UTC m=+0.086240858 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Sep 30 14:56:16 compute-0 podman[296766]: 2025-09-30 14:56:16.162999583 +0000 UTC m=+0.079693808 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:56:16 compute-0 podman[296764]: 2025-09-30 14:56:16.167064549 +0000 UTC m=+0.090291234 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Sep 30 14:56:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Sep 30 14:56:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:16.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:17 compute-0 nova_compute[261524]: 2025-09-30 14:56:17.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:17.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:17 compute-0 ceph-mon[74194]: pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Sep 30 14:56:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:17 compute-0 sshd-session[296759]: Connection closed by 195.184.76.140 port 34797
Sep 30 14:56:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:56:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:18.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:56:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Sep 30 14:56:18 compute-0 sshd-session[296753]: Failed password for root from 80.94.93.233 port 32300 ssh2
Sep 30 14:56:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:18.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:56:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:18.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:56:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:19.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:19 compute-0 ceph-mon[74194]: pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Sep 30 14:56:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:20.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:20 compute-0 unix_chkpwd[296849]: password check failed for user (root)
Sep 30 14:56:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Sep 30 14:56:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:21.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:21 compute-0 nova_compute[261524]: 2025-09-30 14:56:21.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:21 compute-0 ceph-mon[74194]: pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Sep 30 14:56:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:22.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:56:22 compute-0 nova_compute[261524]: 2025-09-30 14:56:22.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1915098684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:56:22 compute-0 sshd-session[296753]: Failed password for root from 80.94.93.233 port 32300 ssh2
Sep 30 14:56:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:22 compute-0 nova_compute[261524]: 2025-09-30 14:56:22.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:56:22 compute-0 nova_compute[261524]: 2025-09-30 14:56:22.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:56:22 compute-0 nova_compute[261524]: 2025-09-30 14:56:22.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:56:22 compute-0 nova_compute[261524]: 2025-09-30 14:56:22.977 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:56:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:23.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:23 compute-0 ceph-mon[74194]: pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:56:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1161547895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:56:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3003230872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:56:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:23.734Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:23 compute-0 nova_compute[261524]: 2025-09-30 14:56:23.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:56:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:24.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:24 compute-0 sshd-session[296753]: Received disconnect from 80.94.93.233 port 32300:11:  [preauth]
Sep 30 14:56:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:56:24 compute-0 sshd-session[296753]: Disconnected from authenticating user root 80.94.93.233 port 32300 [preauth]
Sep 30 14:56:24 compute-0 sshd-session[296753]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:56:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2011748445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:56:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:24] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Sep 30 14:56:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:24] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Sep 30 14:56:24 compute-0 nova_compute[261524]: 2025-09-30 14:56:24.947 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:56:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:25.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.070 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.071 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.071 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.096 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.096 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.096 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.097 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.097 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:56:25 compute-0 unix_chkpwd[296876]: password check failed for user (root)
Sep 30 14:56:25 compute-0 sshd-session[296854]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:56:25 compute-0 ceph-mon[74194]: pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:56:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:56:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3522619462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.623 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.792 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.793 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4496MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.793 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.793 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.855 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.855 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:56:25 compute-0 nova_compute[261524]: 2025-09-30 14:56:25.872 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:56:26 compute-0 nova_compute[261524]: 2025-09-30 14:56:26.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000027s ======
Sep 30 14:56:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:26.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Sep 30 14:56:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:56:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:56:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2749451695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:56:26 compute-0 nova_compute[261524]: 2025-09-30 14:56:26.405 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:56:26 compute-0 nova_compute[261524]: 2025-09-30 14:56:26.410 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:56:26 compute-0 nova_compute[261524]: 2025-09-30 14:56:26.430 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:56:26 compute-0 nova_compute[261524]: 2025-09-30 14:56:26.431 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:56:26 compute-0 nova_compute[261524]: 2025-09-30 14:56:26.431 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:56:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3522619462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:56:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2749451695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:56:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:27.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:27 compute-0 sshd-session[296854]: Failed password for root from 80.94.93.233 port 53286 ssh2
Sep 30 14:56:27 compute-0 nova_compute[261524]: 2025-09-30 14:56:27.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:27.232Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:56:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:27.233Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:56:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:27.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:56:27 compute-0 nova_compute[261524]: 2025-09-30 14:56:27.312 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:56:27 compute-0 nova_compute[261524]: 2025-09-30 14:56:27.313 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:56:27 compute-0 nova_compute[261524]: 2025-09-30 14:56:27.313 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:56:27 compute-0 nova_compute[261524]: 2025-09-30 14:56:27.313 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:56:27 compute-0 unix_chkpwd[296903]: password check failed for user (root)
Sep 30 14:56:27 compute-0 ceph-mon[74194]: pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Sep 30 14:56:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:28.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s
Sep 30 14:56:28 compute-0 sudo[296906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:56:28 compute-0 sudo[296906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:28 compute-0 sudo[296906]: pam_unix(sudo:session): session closed for user root
Sep 30 14:56:28 compute-0 ceph-mon[74194]: pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s
Sep 30 14:56:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:28.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:56:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:28.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:56:28 compute-0 nova_compute[261524]: 2025-09-30 14:56:28.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:56:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:29.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:56:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:56:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:56:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:56:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:56:29 compute-0 sshd-session[296854]: Failed password for root from 80.94.93.233 port 53286 ssh2
Sep 30 14:56:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:56:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:56:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:56:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:56:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:30.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Sep 30 14:56:30 compute-0 ceph-mon[74194]: pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Sep 30 14:56:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:31.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:31 compute-0 nova_compute[261524]: 2025-09-30 14:56:31.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:31 compute-0 unix_chkpwd[296934]: password check failed for user (root)
Sep 30 14:56:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:32.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Sep 30 14:56:32 compute-0 nova_compute[261524]: 2025-09-30 14:56:32.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:33.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:33 compute-0 ceph-mon[74194]: pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Sep 30 14:56:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:33.735Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:33 compute-0 sshd-session[296854]: Failed password for root from 80.94.93.233 port 53286 ssh2
Sep 30 14:56:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:34.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:34] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Sep 30 14:56:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:34] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Sep 30 14:56:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:35.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:35 compute-0 ceph-mon[74194]: pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:35 compute-0 sshd-session[296854]: Received disconnect from 80.94.93.233 port 53286:11:  [preauth]
Sep 30 14:56:35 compute-0 sshd-session[296854]: Disconnected from authenticating user root 80.94.93.233 port 53286 [preauth]
Sep 30 14:56:35 compute-0 sshd-session[296854]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:56:36 compute-0 nova_compute[261524]: 2025-09-30 14:56:36.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:36 compute-0 unix_chkpwd[296942]: password check failed for user (root)
Sep 30 14:56:36 compute-0 sshd-session[296939]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:56:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:37.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:37 compute-0 nova_compute[261524]: 2025-09-30 14:56:37.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:37.234Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:56:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:37.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:56:37 compute-0 ceph-mon[74194]: pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:38.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:56:38.274 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:56:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:56:38.275 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:56:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:56:38.275 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:56:38 compute-0 sshd-session[296939]: Failed password for root from 80.94.93.233 port 24670 ssh2
Sep 30 14:56:38 compute-0 unix_chkpwd[296945]: password check failed for user (root)
Sep 30 14:56:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:38.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:56:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:38.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:39.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:39 compute-0 ceph-mon[74194]: pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:56:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:40.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:56:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:40 compute-0 sshd-session[296939]: Failed password for root from 80.94.93.233 port 24670 ssh2
Sep 30 14:56:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:41 compute-0 nova_compute[261524]: 2025-09-30 14:56:41.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:41.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:41 compute-0 ceph-mon[74194]: pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:56:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:42.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:56:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:42 compute-0 nova_compute[261524]: 2025-09-30 14:56:42.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:42 compute-0 unix_chkpwd[296950]: password check failed for user (root)
Sep 30 14:56:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:43.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:43 compute-0 ceph-mon[74194]: pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:43.736Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:56:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:44.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:56:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:44 compute-0 sshd-session[296939]: Failed password for root from 80.94.93.233 port 24670 ssh2
Sep 30 14:56:44 compute-0 sshd-session[296939]: Received disconnect from 80.94.93.233 port 24670:11:  [preauth]
Sep 30 14:56:44 compute-0 sshd-session[296939]: Disconnected from authenticating user root 80.94.93.233 port 24670 [preauth]
Sep 30 14:56:44 compute-0 sshd-session[296939]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Sep 30 14:56:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:56:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:56:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:44] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Sep 30 14:56:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:44] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Sep 30 14:56:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:56:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:45.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:56:45 compute-0 ceph-mon[74194]: pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:56:46 compute-0 nova_compute[261524]: 2025-09-30 14:56:46.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:56:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:46.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:56:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:56:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:47.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:56:47 compute-0 podman[296958]: 2025-09-30 14:56:47.175486293 +0000 UTC m=+0.087457963 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:56:47 compute-0 podman[296957]: 2025-09-30 14:56:47.190398815 +0000 UTC m=+0.093608297 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:56:47 compute-0 podman[296955]: 2025-09-30 14:56:47.211709276 +0000 UTC m=+0.123698337 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Sep 30 14:56:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:47.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:47 compute-0 nova_compute[261524]: 2025-09-30 14:56:47.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:47 compute-0 podman[296956]: 2025-09-30 14:56:47.266152334 +0000 UTC m=+0.174032292 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Sep 30 14:56:47 compute-0 ceph-mon[74194]: pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:48.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:48 compute-0 sudo[297039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:56:48 compute-0 sudo[297039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:48 compute-0 sudo[297039]: pam_unix(sudo:session): session closed for user root
Sep 30 14:56:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:48.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:49.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:49 compute-0 ceph-mon[74194]: pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:51 compute-0 nova_compute[261524]: 2025-09-30 14:56:51.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:51.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:51 compute-0 ceph-mon[74194]: pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:56:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:52.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:56:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:52 compute-0 nova_compute[261524]: 2025-09-30 14:56:52.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:53.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:53 compute-0 ceph-mon[74194]: pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:56:53 compute-0 sudo[297069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:56:53 compute-0 sudo[297069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:53 compute-0 sudo[297069]: pam_unix(sudo:session): session closed for user root
Sep 30 14:56:53 compute-0 sudo[297094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:56:53 compute-0 sudo[297094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:53.738Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:54 compute-0 sudo[297094]: pam_unix(sudo:session): session closed for user root
Sep 30 14:56:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:54.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:56:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:56:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:56:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:56:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Sep 30 14:56:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:56:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:56:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:56:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:56:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:56:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:56:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:56:54 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:56:54 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:56:54 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:56:54 compute-0 sudo[297153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:56:54 compute-0 sudo[297153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:54 compute-0 sudo[297153]: pam_unix(sudo:session): session closed for user root
Sep 30 14:56:54 compute-0 sudo[297178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:56:54 compute-0 sudo[297178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:56:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:56:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:56:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:56:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:56:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:56:54 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:56:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:54] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Sep 30 14:56:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:56:54] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Sep 30 14:56:54 compute-0 podman[297243]: 2025-09-30 14:56:54.818427656 +0000 UTC m=+0.056497530 container create 1139777d8bdcebbfb5e5fcceb652dc325dfb1733a548735e6332c427216b9922 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_rubin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:56:54 compute-0 systemd[1]: Started libpod-conmon-1139777d8bdcebbfb5e5fcceb652dc325dfb1733a548735e6332c427216b9922.scope.
Sep 30 14:56:54 compute-0 podman[297243]: 2025-09-30 14:56:54.789456704 +0000 UTC m=+0.027526658 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:56:54 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:56:54 compute-0 podman[297243]: 2025-09-30 14:56:54.9155707 +0000 UTC m=+0.153640574 container init 1139777d8bdcebbfb5e5fcceb652dc325dfb1733a548735e6332c427216b9922 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_rubin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:56:54 compute-0 podman[297243]: 2025-09-30 14:56:54.922719498 +0000 UTC m=+0.160789402 container start 1139777d8bdcebbfb5e5fcceb652dc325dfb1733a548735e6332c427216b9922 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_rubin, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:56:54 compute-0 podman[297243]: 2025-09-30 14:56:54.926463592 +0000 UTC m=+0.164533466 container attach 1139777d8bdcebbfb5e5fcceb652dc325dfb1733a548735e6332c427216b9922 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_rubin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Sep 30 14:56:54 compute-0 zealous_rubin[297259]: 167 167
Sep 30 14:56:54 compute-0 systemd[1]: libpod-1139777d8bdcebbfb5e5fcceb652dc325dfb1733a548735e6332c427216b9922.scope: Deactivated successfully.
Sep 30 14:56:54 compute-0 podman[297243]: 2025-09-30 14:56:54.929275222 +0000 UTC m=+0.167345116 container died 1139777d8bdcebbfb5e5fcceb652dc325dfb1733a548735e6332c427216b9922 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_rubin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:56:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a878883e005e353db9e84cda102c3b835bd121815921d6ebb365684bb4573575-merged.mount: Deactivated successfully.
Sep 30 14:56:54 compute-0 podman[297243]: 2025-09-30 14:56:54.967990188 +0000 UTC m=+0.206060052 container remove 1139777d8bdcebbfb5e5fcceb652dc325dfb1733a548735e6332c427216b9922 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Sep 30 14:56:54 compute-0 systemd[1]: libpod-conmon-1139777d8bdcebbfb5e5fcceb652dc325dfb1733a548735e6332c427216b9922.scope: Deactivated successfully.
Sep 30 14:56:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:55.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:55 compute-0 podman[297280]: 2025-09-30 14:56:55.193268178 +0000 UTC m=+0.068155582 container create 8923a1ee93f75a1818e182e96ab0d55e5e172ed24e650b196d144c42cac1e2f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leakey, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:56:55 compute-0 systemd[1]: Started libpod-conmon-8923a1ee93f75a1818e182e96ab0d55e5e172ed24e650b196d144c42cac1e2f6.scope.
Sep 30 14:56:55 compute-0 podman[297280]: 2025-09-30 14:56:55.170602762 +0000 UTC m=+0.045490206 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:56:55 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d215a603caa530c85142f17a55f01e6359ddf0f48850b61f7edc96461ff10a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d215a603caa530c85142f17a55f01e6359ddf0f48850b61f7edc96461ff10a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d215a603caa530c85142f17a55f01e6359ddf0f48850b61f7edc96461ff10a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d215a603caa530c85142f17a55f01e6359ddf0f48850b61f7edc96461ff10a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d215a603caa530c85142f17a55f01e6359ddf0f48850b61f7edc96461ff10a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:55 compute-0 podman[297280]: 2025-09-30 14:56:55.295312514 +0000 UTC m=+0.170199938 container init 8923a1ee93f75a1818e182e96ab0d55e5e172ed24e650b196d144c42cac1e2f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leakey, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Sep 30 14:56:55 compute-0 podman[297280]: 2025-09-30 14:56:55.307288562 +0000 UTC m=+0.182175966 container start 8923a1ee93f75a1818e182e96ab0d55e5e172ed24e650b196d144c42cac1e2f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leakey, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:56:55 compute-0 podman[297280]: 2025-09-30 14:56:55.310055471 +0000 UTC m=+0.184942915 container attach 8923a1ee93f75a1818e182e96ab0d55e5e172ed24e650b196d144c42cac1e2f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leakey, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Sep 30 14:56:55 compute-0 ceph-mon[74194]: pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:56:55 compute-0 ceph-mon[74194]: pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Sep 30 14:56:55 compute-0 upbeat_leakey[297296]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:56:55 compute-0 upbeat_leakey[297296]: --> All data devices are unavailable
Sep 30 14:56:55 compute-0 systemd[1]: libpod-8923a1ee93f75a1818e182e96ab0d55e5e172ed24e650b196d144c42cac1e2f6.scope: Deactivated successfully.
Sep 30 14:56:55 compute-0 podman[297280]: 2025-09-30 14:56:55.64109712 +0000 UTC m=+0.515984564 container died 8923a1ee93f75a1818e182e96ab0d55e5e172ed24e650b196d144c42cac1e2f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leakey, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Sep 30 14:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d215a603caa530c85142f17a55f01e6359ddf0f48850b61f7edc96461ff10a0-merged.mount: Deactivated successfully.
Sep 30 14:56:55 compute-0 podman[297280]: 2025-09-30 14:56:55.824821114 +0000 UTC m=+0.699708538 container remove 8923a1ee93f75a1818e182e96ab0d55e5e172ed24e650b196d144c42cac1e2f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_leakey, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:56:55 compute-0 systemd[1]: libpod-conmon-8923a1ee93f75a1818e182e96ab0d55e5e172ed24e650b196d144c42cac1e2f6.scope: Deactivated successfully.
Sep 30 14:56:55 compute-0 sudo[297178]: pam_unix(sudo:session): session closed for user root
Sep 30 14:56:55 compute-0 sudo[297326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:56:55 compute-0 sudo[297326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:55 compute-0 sudo[297326]: pam_unix(sudo:session): session closed for user root
Sep 30 14:56:56 compute-0 nova_compute[261524]: 2025-09-30 14:56:56.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:56 compute-0 sudo[297352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:56:56 compute-0 sudo[297352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:56.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 915 B/s rd, 0 op/s
Sep 30 14:56:56 compute-0 podman[297418]: 2025-09-30 14:56:56.516929711 +0000 UTC m=+0.047798334 container create df37980fe8fe3fac527e52db37d996c4a1e433ec9192ec683d1d762c59d83c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:56:56 compute-0 systemd[1]: Started libpod-conmon-df37980fe8fe3fac527e52db37d996c4a1e433ec9192ec683d1d762c59d83c56.scope.
Sep 30 14:56:56 compute-0 podman[297418]: 2025-09-30 14:56:56.491019864 +0000 UTC m=+0.021888497 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:56:56 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:56:56 compute-0 podman[297418]: 2025-09-30 14:56:56.62912616 +0000 UTC m=+0.159994783 container init df37980fe8fe3fac527e52db37d996c4a1e433ec9192ec683d1d762c59d83c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldwasser, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:56:56 compute-0 podman[297418]: 2025-09-30 14:56:56.63996622 +0000 UTC m=+0.170834843 container start df37980fe8fe3fac527e52db37d996c4a1e433ec9192ec683d1d762c59d83c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 14:56:56 compute-0 optimistic_goldwasser[297434]: 167 167
Sep 30 14:56:56 compute-0 systemd[1]: libpod-df37980fe8fe3fac527e52db37d996c4a1e433ec9192ec683d1d762c59d83c56.scope: Deactivated successfully.
Sep 30 14:56:56 compute-0 podman[297418]: 2025-09-30 14:56:56.653533798 +0000 UTC m=+0.184402501 container attach df37980fe8fe3fac527e52db37d996c4a1e433ec9192ec683d1d762c59d83c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Sep 30 14:56:56 compute-0 podman[297418]: 2025-09-30 14:56:56.65439726 +0000 UTC m=+0.185265903 container died df37980fe8fe3fac527e52db37d996c4a1e433ec9192ec683d1d762c59d83c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:56:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0e3ff8cabd8bdf0e5dde17466deb0f7caaf966746b7650d53ed6baea09779fc-merged.mount: Deactivated successfully.
Sep 30 14:56:56 compute-0 podman[297418]: 2025-09-30 14:56:56.754300962 +0000 UTC m=+0.285169605 container remove df37980fe8fe3fac527e52db37d996c4a1e433ec9192ec683d1d762c59d83c56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 14:56:56 compute-0 systemd[1]: libpod-conmon-df37980fe8fe3fac527e52db37d996c4a1e433ec9192ec683d1d762c59d83c56.scope: Deactivated successfully.
Sep 30 14:56:56 compute-0 podman[297459]: 2025-09-30 14:56:56.975629764 +0000 UTC m=+0.060362477 container create 1b023c70e6b5b965d4d0900f4eed0fd2e59404b4f465077ef79f42715f1f530e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_nash, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 14:56:57 compute-0 systemd[1]: Started libpod-conmon-1b023c70e6b5b965d4d0900f4eed0fd2e59404b4f465077ef79f42715f1f530e.scope.
Sep 30 14:56:57 compute-0 podman[297459]: 2025-09-30 14:56:56.942546719 +0000 UTC m=+0.027279512 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:56:57 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:56:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16aea0da49abf757cfd6d62ea673d4b259ea25d3d03cc6cb57f0dfe5709034d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16aea0da49abf757cfd6d62ea673d4b259ea25d3d03cc6cb57f0dfe5709034d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16aea0da49abf757cfd6d62ea673d4b259ea25d3d03cc6cb57f0dfe5709034d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16aea0da49abf757cfd6d62ea673d4b259ea25d3d03cc6cb57f0dfe5709034d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:57.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:57 compute-0 podman[297459]: 2025-09-30 14:56:57.098755306 +0000 UTC m=+0.183488099 container init 1b023c70e6b5b965d4d0900f4eed0fd2e59404b4f465077ef79f42715f1f530e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 14:56:57 compute-0 podman[297459]: 2025-09-30 14:56:57.105401062 +0000 UTC m=+0.190133805 container start 1b023c70e6b5b965d4d0900f4eed0fd2e59404b4f465077ef79f42715f1f530e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:56:57 compute-0 podman[297459]: 2025-09-30 14:56:57.116867308 +0000 UTC m=+0.201600051 container attach 1b023c70e6b5b965d4d0900f4eed0fd2e59404b4f465077ef79f42715f1f530e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_nash, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:56:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:57.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:57 compute-0 nova_compute[261524]: 2025-09-30 14:56:57.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:56:57 compute-0 awesome_nash[297476]: {
Sep 30 14:56:57 compute-0 awesome_nash[297476]:     "0": [
Sep 30 14:56:57 compute-0 awesome_nash[297476]:         {
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "devices": [
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "/dev/loop3"
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             ],
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "lv_name": "ceph_lv0",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "lv_size": "21470642176",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "name": "ceph_lv0",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "tags": {
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.cluster_name": "ceph",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.crush_device_class": "",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.encrypted": "0",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.osd_id": "0",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.type": "block",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.vdo": "0",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:                 "ceph.with_tpm": "0"
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             },
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "type": "block",
Sep 30 14:56:57 compute-0 awesome_nash[297476]:             "vg_name": "ceph_vg0"
Sep 30 14:56:57 compute-0 awesome_nash[297476]:         }
Sep 30 14:56:57 compute-0 awesome_nash[297476]:     ]
Sep 30 14:56:57 compute-0 awesome_nash[297476]: }
Sep 30 14:56:57 compute-0 systemd[1]: libpod-1b023c70e6b5b965d4d0900f4eed0fd2e59404b4f465077ef79f42715f1f530e.scope: Deactivated successfully.
Sep 30 14:56:57 compute-0 podman[297486]: 2025-09-30 14:56:57.476742646 +0000 UTC m=+0.022047921 container died 1b023c70e6b5b965d4d0900f4eed0fd2e59404b4f465077ef79f42715f1f530e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_nash, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:56:57 compute-0 ceph-mon[74194]: pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 915 B/s rd, 0 op/s
Sep 30 14:56:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c16aea0da49abf757cfd6d62ea673d4b259ea25d3d03cc6cb57f0dfe5709034d-merged.mount: Deactivated successfully.
Sep 30 14:56:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:56:57 compute-0 podman[297486]: 2025-09-30 14:56:57.658453479 +0000 UTC m=+0.203758744 container remove 1b023c70e6b5b965d4d0900f4eed0fd2e59404b4f465077ef79f42715f1f530e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:56:57 compute-0 systemd[1]: libpod-conmon-1b023c70e6b5b965d4d0900f4eed0fd2e59404b4f465077ef79f42715f1f530e.scope: Deactivated successfully.
Sep 30 14:56:57 compute-0 sudo[297352]: pam_unix(sudo:session): session closed for user root
Sep 30 14:56:57 compute-0 sudo[297500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:56:57 compute-0 sudo[297500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:57 compute-0 sudo[297500]: pam_unix(sudo:session): session closed for user root
Sep 30 14:56:57 compute-0 sudo[297525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:56:57 compute-0 sudo[297525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:56:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:56:58.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Sep 30 14:56:58 compute-0 podman[297594]: 2025-09-30 14:56:58.337576861 +0000 UTC m=+0.085559085 container create 52fb0476edd35a1cf4be253b358d7857b76a1f333ce542ac1ba33cfa99da77fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_payne, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:56:58 compute-0 nova_compute[261524]: 2025-09-30 14:56:58.371 2 DEBUG oslo_concurrency.processutils [None req-5c1dd5f9-4c04-4fe4-a375-236068b8a0b9 61ea0d6bfa3d476f918c321afd8731f2 5beed35d375f4bd185a6774dc475e0b9 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:56:58 compute-0 podman[297594]: 2025-09-30 14:56:58.298418464 +0000 UTC m=+0.046400698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:56:58 compute-0 systemd[1]: Started libpod-conmon-52fb0476edd35a1cf4be253b358d7857b76a1f333ce542ac1ba33cfa99da77fe.scope.
Sep 30 14:56:58 compute-0 nova_compute[261524]: 2025-09-30 14:56:58.414 2 DEBUG oslo_concurrency.processutils [None req-5c1dd5f9-4c04-4fe4-a375-236068b8a0b9 61ea0d6bfa3d476f918c321afd8731f2 5beed35d375f4bd185a6774dc475e0b9 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:56:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:56:58 compute-0 podman[297594]: 2025-09-30 14:56:58.477932483 +0000 UTC m=+0.225914727 container init 52fb0476edd35a1cf4be253b358d7857b76a1f333ce542ac1ba33cfa99da77fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_payne, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:56:58 compute-0 podman[297594]: 2025-09-30 14:56:58.489675576 +0000 UTC m=+0.237657810 container start 52fb0476edd35a1cf4be253b358d7857b76a1f333ce542ac1ba33cfa99da77fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:56:58 compute-0 cool_payne[297612]: 167 167
Sep 30 14:56:58 compute-0 systemd[1]: libpod-52fb0476edd35a1cf4be253b358d7857b76a1f333ce542ac1ba33cfa99da77fe.scope: Deactivated successfully.
Sep 30 14:56:58 compute-0 podman[297594]: 2025-09-30 14:56:58.509745286 +0000 UTC m=+0.257727530 container attach 52fb0476edd35a1cf4be253b358d7857b76a1f333ce542ac1ba33cfa99da77fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Sep 30 14:56:58 compute-0 podman[297594]: 2025-09-30 14:56:58.513557302 +0000 UTC m=+0.261539536 container died 52fb0476edd35a1cf4be253b358d7857b76a1f333ce542ac1ba33cfa99da77fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_payne, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:56:58 compute-0 ceph-mon[74194]: pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Sep 30 14:56:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-eed6c079fa6044f7ef44d7481681fd815c61d3678ee746d3d813554cb904ceca-merged.mount: Deactivated successfully.
Sep 30 14:56:58 compute-0 podman[297594]: 2025-09-30 14:56:58.656396875 +0000 UTC m=+0.404379089 container remove 52fb0476edd35a1cf4be253b358d7857b76a1f333ce542ac1ba33cfa99da77fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_payne, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:56:58 compute-0 systemd[1]: libpod-conmon-52fb0476edd35a1cf4be253b358d7857b76a1f333ce542ac1ba33cfa99da77fe.scope: Deactivated successfully.
Sep 30 14:56:58 compute-0 podman[297636]: 2025-09-30 14:56:58.850078277 +0000 UTC m=+0.067069554 container create b4dd043c452480d5e061cea3b6a69d4e7676770a96f2066da9bbfb8ad5526a84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:56:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:56:58.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:56:58 compute-0 podman[297636]: 2025-09-30 14:56:58.807164007 +0000 UTC m=+0.024155334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:56:58 compute-0 systemd[1]: Started libpod-conmon-b4dd043c452480d5e061cea3b6a69d4e7676770a96f2066da9bbfb8ad5526a84.scope.
Sep 30 14:56:58 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a9c0d9e6e164d1a010626e7df47d5b7559fb6fdbb38c3109b404ddbf7132e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a9c0d9e6e164d1a010626e7df47d5b7559fb6fdbb38c3109b404ddbf7132e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a9c0d9e6e164d1a010626e7df47d5b7559fb6fdbb38c3109b404ddbf7132e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a9c0d9e6e164d1a010626e7df47d5b7559fb6fdbb38c3109b404ddbf7132e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:56:58 compute-0 podman[297636]: 2025-09-30 14:56:58.975967388 +0000 UTC m=+0.192958645 container init b4dd043c452480d5e061cea3b6a69d4e7676770a96f2066da9bbfb8ad5526a84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ramanujan, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Sep 30 14:56:58 compute-0 podman[297636]: 2025-09-30 14:56:58.987604148 +0000 UTC m=+0.204595385 container start b4dd043c452480d5e061cea3b6a69d4e7676770a96f2066da9bbfb8ad5526a84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ramanujan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Sep 30 14:56:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:56:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:56:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:56:59 compute-0 podman[297636]: 2025-09-30 14:56:59.002190792 +0000 UTC m=+0.219182069 container attach b4dd043c452480d5e061cea3b6a69d4e7676770a96f2066da9bbfb8ad5526a84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:56:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:56:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:56:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:56:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:56:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:56:59.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:56:59
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'vms', 'images', 'backups', '.rgw.root', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log']
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:56:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:56:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:56:59 compute-0 lvm[297728]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:56:59 compute-0 lvm[297728]: VG ceph_vg0 finished
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:56:59 compute-0 confident_ramanujan[297652]: {}
Sep 30 14:56:59 compute-0 systemd[1]: libpod-b4dd043c452480d5e061cea3b6a69d4e7676770a96f2066da9bbfb8ad5526a84.scope: Deactivated successfully.
Sep 30 14:56:59 compute-0 systemd[1]: libpod-b4dd043c452480d5e061cea3b6a69d4e7676770a96f2066da9bbfb8ad5526a84.scope: Consumed 1.276s CPU time.
Sep 30 14:56:59 compute-0 podman[297636]: 2025-09-30 14:56:59.798127819 +0000 UTC m=+1.015119116 container died b4dd043c452480d5e061cea3b6a69d4e7676770a96f2066da9bbfb8ad5526a84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ramanujan, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:56:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:56:59 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:57:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:57:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:00.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Sep 30 14:57:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-974a9c0d9e6e164d1a010626e7df47d5b7559fb6fdbb38c3109b404ddbf7132e-merged.mount: Deactivated successfully.
Sep 30 14:57:00 compute-0 podman[297636]: 2025-09-30 14:57:00.789651455 +0000 UTC m=+2.006642702 container remove b4dd043c452480d5e061cea3b6a69d4e7676770a96f2066da9bbfb8ad5526a84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ramanujan, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 14:57:00 compute-0 sudo[297525]: pam_unix(sudo:session): session closed for user root
Sep 30 14:57:00 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:57:00 compute-0 systemd[1]: libpod-conmon-b4dd043c452480d5e061cea3b6a69d4e7676770a96f2066da9bbfb8ad5526a84.scope: Deactivated successfully.
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:57:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:57:01 compute-0 nova_compute[261524]: 2025-09-30 14:57:01.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:01.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:57:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:57:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:57:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:57:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:57:01 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:57:01 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:57:01 compute-0 ceph-mon[74194]: pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Sep 30 14:57:02 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:57:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:02.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:02 compute-0 sudo[297746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:57:02 compute-0 sudo[297746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:57:02 compute-0 sudo[297746]: pam_unix(sudo:session): session closed for user root
Sep 30 14:57:02 compute-0 nova_compute[261524]: 2025-09-30 14:57:02.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Sep 30 14:57:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:57:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:57:03 compute-0 ceph-mon[74194]: pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Sep 30 14:57:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:03.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:03.739Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:57:03.981 163966 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:30:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:54:af:bb:5a:5f'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Sep 30 14:57:03 compute-0 nova_compute[261524]: 2025-09-30 14:57:03.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:03 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:57:03.982 163966 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Sep 30 14:57:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:04.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:04 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Sep 30 14:57:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:04] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Sep 30 14:57:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:04] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Sep 30 14:57:04 compute-0 ceph-mon[74194]: pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 610 B/s rd, 0 op/s
Sep 30 14:57:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:05.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:06 compute-0 nova_compute[261524]: 2025-09-30 14:57:06.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:57:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:06.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:57:06 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:07.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:07 compute-0 ceph-mon[74194]: pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:07.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:07 compute-0 nova_compute[261524]: 2025-09-30 14:57:07.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:08.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:08 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:08 compute-0 sudo[297777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:57:08 compute-0 sudo[297777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:57:08 compute-0 sudo[297777]: pam_unix(sudo:session): session closed for user root
Sep 30 14:57:08 compute-0 ceph-mon[74194]: pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:08.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:57:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:08.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:08 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:57:08.985 163966 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c6331d25-78a2-493c-bb43-51ad387342be, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Sep 30 14:57:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:09.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:10.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:10 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:11 compute-0 nova_compute[261524]: 2025-09-30 14:57:11.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 14:57:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2305621749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:57:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 14:57:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2305621749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:57:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:11.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:11 compute-0 ceph-mon[74194]: pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2305621749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:57:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2305621749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:57:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:12.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:12 compute-0 nova_compute[261524]: 2025-09-30 14:57:12.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:12 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:13.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:13 compute-0 ceph-mon[74194]: pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:13.740Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:14 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:57:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:57:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:14] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:57:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:14] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:57:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:15.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:15 compute-0 ceph-mon[74194]: pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:57:16 compute-0 nova_compute[261524]: 2025-09-30 14:57:16.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:16.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:16 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:17.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:17.239Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:57:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:17.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:57:17 compute-0 nova_compute[261524]: 2025-09-30 14:57:17.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:17 compute-0 ceph-mon[74194]: pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:18 compute-0 podman[297815]: 2025-09-30 14:57:18.140584021 +0000 UTC m=+0.050925261 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 14:57:18 compute-0 podman[297814]: 2025-09-30 14:57:18.146229342 +0000 UTC m=+0.060872000 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Sep 30 14:57:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:18.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:18 compute-0 podman[297812]: 2025-09-30 14:57:18.169051911 +0000 UTC m=+0.084666723 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:57:18 compute-0 podman[297813]: 2025-09-30 14:57:18.20508295 +0000 UTC m=+0.121365638 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Sep 30 14:57:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:18.891Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:57:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:18.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:57:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:19.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:19 compute-0 ceph-mon[74194]: pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:57:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:20.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:57:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:21 compute-0 nova_compute[261524]: 2025-09-30 14:57:21.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:21.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:21 compute-0 ceph-mon[74194]: pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:22.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:22 compute-0 nova_compute[261524]: 2025-09-30 14:57:22.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:22 compute-0 ceph-mon[74194]: pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:22 compute-0 nova_compute[261524]: 2025-09-30 14:57:22.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:57:22 compute-0 nova_compute[261524]: 2025-09-30 14:57:22.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:57:22 compute-0 nova_compute[261524]: 2025-09-30 14:57:22.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:57:22 compute-0 nova_compute[261524]: 2025-09-30 14:57:22.973 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:57:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:23.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:23.742Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:24.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3605473371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:57:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:24] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:57:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:24] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:57:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:25.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:25 compute-0 ceph-mon[74194]: pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1470809467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:57:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1535186889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.978 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.979 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.979 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.979 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:57:25 compute-0 nova_compute[261524]: 2025-09-30 14:57:25.979 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:57:26 compute-0 nova_compute[261524]: 2025-09-30 14:57:26.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:26.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:57:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670463785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:57:26 compute-0 nova_compute[261524]: 2025-09-30 14:57:26.470 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:57:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1939895199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:57:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/670463785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:57:26 compute-0 nova_compute[261524]: 2025-09-30 14:57:26.621 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:57:26 compute-0 nova_compute[261524]: 2025-09-30 14:57:26.622 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4486MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:57:26 compute-0 nova_compute[261524]: 2025-09-30 14:57:26.623 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:57:26 compute-0 nova_compute[261524]: 2025-09-30 14:57:26.623 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:57:26 compute-0 nova_compute[261524]: 2025-09-30 14:57:26.695 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:57:26 compute-0 nova_compute[261524]: 2025-09-30 14:57:26.696 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:57:26 compute-0 nova_compute[261524]: 2025-09-30 14:57:26.711 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:57:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:27.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:57:27 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399402267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:57:27 compute-0 nova_compute[261524]: 2025-09-30 14:57:27.166 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:57:27 compute-0 nova_compute[261524]: 2025-09-30 14:57:27.172 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:57:27 compute-0 nova_compute[261524]: 2025-09-30 14:57:27.187 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:57:27 compute-0 nova_compute[261524]: 2025-09-30 14:57:27.189 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:57:27 compute-0 nova_compute[261524]: 2025-09-30 14:57:27.189 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:57:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:27.240Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:27 compute-0 nova_compute[261524]: 2025-09-30 14:57:27.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:27 compute-0 ceph-mon[74194]: pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1399402267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:57:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:28.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:28 compute-0 ceph-mon[74194]: pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:28 compute-0 sudo[297941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:57:28 compute-0 sudo[297941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:57:28 compute-0 sudo[297941]: pam_unix(sudo:session): session closed for user root
Sep 30 14:57:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:28.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:57:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:28.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:29.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:29 compute-0 nova_compute[261524]: 2025-09-30 14:57:29.184 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:57:29 compute-0 nova_compute[261524]: 2025-09-30 14:57:29.185 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:57:29 compute-0 nova_compute[261524]: 2025-09-30 14:57:29.185 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:57:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:57:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:57:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:57:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:57:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:57:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:57:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:57:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:57:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:57:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:30.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:30 compute-0 ceph-mon[74194]: pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:31 compute-0 nova_compute[261524]: 2025-09-30 14:57:31.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:31.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:32.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:32 compute-0 nova_compute[261524]: 2025-09-30 14:57:32.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:33.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:33 compute-0 ceph-mon[74194]: pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:33.743Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:34.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:34] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:57:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:34] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:57:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:35.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:35 compute-0 ceph-mon[74194]: pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:36 compute-0 nova_compute[261524]: 2025-09-30 14:57:36.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:36 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:37.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:37.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:37 compute-0 nova_compute[261524]: 2025-09-30 14:57:37.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:37 compute-0 ceph-mon[74194]: pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:38.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:57:38.276 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:57:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:57:38.277 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:57:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:57:38.277 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:57:38 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:38.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:39.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:39 compute-0 ceph-mon[74194]: pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:40.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:40 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:41 compute-0 nova_compute[261524]: 2025-09-30 14:57:41.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:41.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:41 compute-0 ceph-mon[74194]: pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:42.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:42 compute-0 nova_compute[261524]: 2025-09-30 14:57:42.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:42 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:43.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:43 compute-0 ceph-mon[74194]: pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:43.744Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:44.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:44 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:57:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:57:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:44] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Sep 30 14:57:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:44] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Sep 30 14:57:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:45.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:45 compute-0 ceph-mon[74194]: pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:57:46 compute-0 nova_compute[261524]: 2025-09-30 14:57:46.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:46.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:46 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:47.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:47.242Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:47 compute-0 nova_compute[261524]: 2025-09-30 14:57:47.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:47 compute-0 ceph-mon[74194]: pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:48.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:48 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:48 compute-0 sudo[297986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:57:48 compute-0 sudo[297986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:57:48 compute-0 sudo[297986]: pam_unix(sudo:session): session closed for user root
Sep 30 14:57:48 compute-0 podman[298010]: 2025-09-30 14:57:48.834493386 +0000 UTC m=+0.068185052 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Sep 30 14:57:48 compute-0 podman[298013]: 2025-09-30 14:57:48.855365047 +0000 UTC m=+0.072568951 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Sep 30 14:57:48 compute-0 podman[298012]: 2025-09-30 14:57:48.855553302 +0000 UTC m=+0.077030313 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20250923, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:57:48 compute-0 podman[298011]: 2025-09-30 14:57:48.879358896 +0000 UTC m=+0.109781180 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 14:57:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:48.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:57:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:48.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:57:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:48.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:57:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:49 compute-0 ceph-mon[74194]: pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:50.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:50 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:50 compute-0 ceph-mon[74194]: pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:51 compute-0 nova_compute[261524]: 2025-09-30 14:57:51.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:51.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:52.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:52 compute-0 nova_compute[261524]: 2025-09-30 14:57:52.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:52 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:53.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:53 compute-0 ceph-mon[74194]: pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:53.745Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:54.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:54 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:54] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Sep 30 14:57:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:57:54] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Sep 30 14:57:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:55.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:55 compute-0 ceph-mon[74194]: pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:56 compute-0 nova_compute[261524]: 2025-09-30 14:57:56.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:56.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:56 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:57.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:57.243Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:57 compute-0 nova_compute[261524]: 2025-09-30 14:57:57.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:57:57 compute-0 ceph-mon[74194]: pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:57:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:57:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:57:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:57:58.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:57:58 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:57:58.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:57:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:57:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:57:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:57:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:57:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:57:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:57:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:57:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:57:59.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:57:59 compute-0 ceph-mon[74194]: pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:57:59
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.nfs', 'vms', '.mgr']
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:57:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:57:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:57:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:58:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:00.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:58:00 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:58:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:58:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:58:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:58:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:58:01 compute-0 nova_compute[261524]: 2025-09-30 14:58:01.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:58:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:58:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:58:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:58:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:01.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:58:01 compute-0 ceph-mon[74194]: pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:02.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:02 compute-0 nova_compute[261524]: 2025-09-30 14:58:02.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:02 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:02 compute-0 ceph-mgr[74485]: [devicehealth INFO root] Check health
Sep 30 14:58:02 compute-0 sudo[298101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:58:02 compute-0 sudo[298101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:02 compute-0 sudo[298101]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:02 compute-0 ceph-mon[74194]: pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:02 compute-0 sudo[298126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:58:02 compute-0 sudo[298126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:03 compute-0 sudo[298126]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:03.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:58:03 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:58:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:58:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:58:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 562 B/s rd, 0 op/s
Sep 30 14:58:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:58:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:58:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:58:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:58:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:58:03 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:58:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:58:03 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:58:03 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:58:03 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:58:03 compute-0 sudo[298183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:58:03 compute-0 sudo[298183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:03 compute-0 sudo[298183]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:03 compute-0 sudo[298208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:58:03 compute-0 sudo[298208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:58:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:58:03 compute-0 ceph-mon[74194]: pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 562 B/s rd, 0 op/s
Sep 30 14:58:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:58:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:58:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:58:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:58:03 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:58:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:03.746Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:58:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:03.746Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:03 compute-0 podman[298273]: 2025-09-30 14:58:03.848676348 +0000 UTC m=+0.036644625 container create 7ad839049f898f31e469f38960aa2a776f9dd0beff26d1dbf35bde7aed780156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Sep 30 14:58:03 compute-0 systemd[1]: Started libpod-conmon-7ad839049f898f31e469f38960aa2a776f9dd0beff26d1dbf35bde7aed780156.scope.
Sep 30 14:58:03 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:58:03 compute-0 podman[298273]: 2025-09-30 14:58:03.831825258 +0000 UTC m=+0.019793555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:58:03 compute-0 podman[298273]: 2025-09-30 14:58:03.936006797 +0000 UTC m=+0.123975094 container init 7ad839049f898f31e469f38960aa2a776f9dd0beff26d1dbf35bde7aed780156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 14:58:03 compute-0 podman[298273]: 2025-09-30 14:58:03.943773511 +0000 UTC m=+0.131741788 container start 7ad839049f898f31e469f38960aa2a776f9dd0beff26d1dbf35bde7aed780156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:58:03 compute-0 podman[298273]: 2025-09-30 14:58:03.947597376 +0000 UTC m=+0.135565643 container attach 7ad839049f898f31e469f38960aa2a776f9dd0beff26d1dbf35bde7aed780156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_villani, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Sep 30 14:58:03 compute-0 practical_villani[298289]: 167 167
Sep 30 14:58:03 compute-0 systemd[1]: libpod-7ad839049f898f31e469f38960aa2a776f9dd0beff26d1dbf35bde7aed780156.scope: Deactivated successfully.
Sep 30 14:58:03 compute-0 conmon[298289]: conmon 7ad839049f898f31e469 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ad839049f898f31e469f38960aa2a776f9dd0beff26d1dbf35bde7aed780156.scope/container/memory.events
Sep 30 14:58:03 compute-0 podman[298273]: 2025-09-30 14:58:03.950869888 +0000 UTC m=+0.138838155 container died 7ad839049f898f31e469f38960aa2a776f9dd0beff26d1dbf35bde7aed780156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_villani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cb206d55609a1da5ac7dd45bd848723e9c07f2b8769637e20406e390661ba81-merged.mount: Deactivated successfully.
Sep 30 14:58:03 compute-0 podman[298273]: 2025-09-30 14:58:03.9910399 +0000 UTC m=+0.179008177 container remove 7ad839049f898f31e469f38960aa2a776f9dd0beff26d1dbf35bde7aed780156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_villani, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Sep 30 14:58:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:04 compute-0 systemd[1]: libpod-conmon-7ad839049f898f31e469f38960aa2a776f9dd0beff26d1dbf35bde7aed780156.scope: Deactivated successfully.
Sep 30 14:58:04 compute-0 podman[298315]: 2025-09-30 14:58:04.161110023 +0000 UTC m=+0.041530917 container create 4bc7ee58bf4c4cde7c92a2a7ef3178d8a0ae8119d2edccbc762e329d87495fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_khorana, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 14:58:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:04.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:04 compute-0 systemd[1]: Started libpod-conmon-4bc7ee58bf4c4cde7c92a2a7ef3178d8a0ae8119d2edccbc762e329d87495fd7.scope.
Sep 30 14:58:04 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:58:04 compute-0 podman[298315]: 2025-09-30 14:58:04.140607921 +0000 UTC m=+0.021028845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8b14b0cacb6ca0d1ff90286dcf4288fe53addb3aafd74fe95c8ea7075eb5ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8b14b0cacb6ca0d1ff90286dcf4288fe53addb3aafd74fe95c8ea7075eb5ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8b14b0cacb6ca0d1ff90286dcf4288fe53addb3aafd74fe95c8ea7075eb5ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8b14b0cacb6ca0d1ff90286dcf4288fe53addb3aafd74fe95c8ea7075eb5ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8b14b0cacb6ca0d1ff90286dcf4288fe53addb3aafd74fe95c8ea7075eb5ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:04 compute-0 podman[298315]: 2025-09-30 14:58:04.253264702 +0000 UTC m=+0.133685626 container init 4bc7ee58bf4c4cde7c92a2a7ef3178d8a0ae8119d2edccbc762e329d87495fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Sep 30 14:58:04 compute-0 podman[298315]: 2025-09-30 14:58:04.263247791 +0000 UTC m=+0.143668695 container start 4bc7ee58bf4c4cde7c92a2a7ef3178d8a0ae8119d2edccbc762e329d87495fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Sep 30 14:58:04 compute-0 podman[298315]: 2025-09-30 14:58:04.267034205 +0000 UTC m=+0.147455179 container attach 4bc7ee58bf4c4cde7c92a2a7ef3178d8a0ae8119d2edccbc762e329d87495fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Sep 30 14:58:04 compute-0 tender_khorana[298331]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:58:04 compute-0 tender_khorana[298331]: --> All data devices are unavailable
Sep 30 14:58:04 compute-0 systemd[1]: libpod-4bc7ee58bf4c4cde7c92a2a7ef3178d8a0ae8119d2edccbc762e329d87495fd7.scope: Deactivated successfully.
Sep 30 14:58:04 compute-0 podman[298315]: 2025-09-30 14:58:04.658696557 +0000 UTC m=+0.539117461 container died 4bc7ee58bf4c4cde7c92a2a7ef3178d8a0ae8119d2edccbc762e329d87495fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Sep 30 14:58:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc8b14b0cacb6ca0d1ff90286dcf4288fe53addb3aafd74fe95c8ea7075eb5ce-merged.mount: Deactivated successfully.
Sep 30 14:58:04 compute-0 podman[298315]: 2025-09-30 14:58:04.706750025 +0000 UTC m=+0.587170919 container remove 4bc7ee58bf4c4cde7c92a2a7ef3178d8a0ae8119d2edccbc762e329d87495fd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_khorana, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 14:58:04 compute-0 systemd[1]: libpod-conmon-4bc7ee58bf4c4cde7c92a2a7ef3178d8a0ae8119d2edccbc762e329d87495fd7.scope: Deactivated successfully.
Sep 30 14:58:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:04] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:58:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:04] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:58:04 compute-0 sudo[298208]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:04 compute-0 sudo[298356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:58:04 compute-0 sudo[298356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:04 compute-0 sudo[298356]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:04 compute-0 sudo[298381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:58:04 compute-0 sudo[298381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:05.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 562 B/s rd, 0 op/s
Sep 30 14:58:05 compute-0 podman[298444]: 2025-09-30 14:58:05.347566823 +0000 UTC m=+0.044923582 container create 2e2151c94b483ded25e0ee0f3abad38b149fef00a6e34207d98d9fc101c5727f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lederberg, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 14:58:05 compute-0 systemd[1]: Started libpod-conmon-2e2151c94b483ded25e0ee0f3abad38b149fef00a6e34207d98d9fc101c5727f.scope.
Sep 30 14:58:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:58:05 compute-0 podman[298444]: 2025-09-30 14:58:05.329535813 +0000 UTC m=+0.026892602 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:58:05 compute-0 podman[298444]: 2025-09-30 14:58:05.431663991 +0000 UTC m=+0.129020740 container init 2e2151c94b483ded25e0ee0f3abad38b149fef00a6e34207d98d9fc101c5727f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lederberg, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:58:05 compute-0 podman[298444]: 2025-09-30 14:58:05.439373343 +0000 UTC m=+0.136730092 container start 2e2151c94b483ded25e0ee0f3abad38b149fef00a6e34207d98d9fc101c5727f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lederberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:58:05 compute-0 podman[298444]: 2025-09-30 14:58:05.442225544 +0000 UTC m=+0.139582303 container attach 2e2151c94b483ded25e0ee0f3abad38b149fef00a6e34207d98d9fc101c5727f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Sep 30 14:58:05 compute-0 magical_lederberg[298461]: 167 167
Sep 30 14:58:05 compute-0 systemd[1]: libpod-2e2151c94b483ded25e0ee0f3abad38b149fef00a6e34207d98d9fc101c5727f.scope: Deactivated successfully.
Sep 30 14:58:05 compute-0 conmon[298461]: conmon 2e2151c94b483ded25e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e2151c94b483ded25e0ee0f3abad38b149fef00a6e34207d98d9fc101c5727f.scope/container/memory.events
Sep 30 14:58:05 compute-0 podman[298444]: 2025-09-30 14:58:05.446576123 +0000 UTC m=+0.143932882 container died 2e2151c94b483ded25e0ee0f3abad38b149fef00a6e34207d98d9fc101c5727f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Sep 30 14:58:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-57df7552d1d45969b774f794057179a9ef5b05a339ea03af98a63c5c6ec73345-merged.mount: Deactivated successfully.
Sep 30 14:58:05 compute-0 podman[298444]: 2025-09-30 14:58:05.486405756 +0000 UTC m=+0.183762505 container remove 2e2151c94b483ded25e0ee0f3abad38b149fef00a6e34207d98d9fc101c5727f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lederberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:58:05 compute-0 systemd[1]: libpod-conmon-2e2151c94b483ded25e0ee0f3abad38b149fef00a6e34207d98d9fc101c5727f.scope: Deactivated successfully.
Sep 30 14:58:05 compute-0 podman[298485]: 2025-09-30 14:58:05.665943825 +0000 UTC m=+0.047896166 container create 1189d763c6613c669f21e15418822f7c7891898b6adbe652834db3c584517e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mendel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Sep 30 14:58:05 compute-0 systemd[1]: Started libpod-conmon-1189d763c6613c669f21e15418822f7c7891898b6adbe652834db3c584517e6e.scope.
Sep 30 14:58:05 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4da88f50626f607d438950a3a7dcb4b0dca60df5cd084fcd8f90f9669b158ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4da88f50626f607d438950a3a7dcb4b0dca60df5cd084fcd8f90f9669b158ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:05 compute-0 podman[298485]: 2025-09-30 14:58:05.645501405 +0000 UTC m=+0.027453756 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4da88f50626f607d438950a3a7dcb4b0dca60df5cd084fcd8f90f9669b158ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4da88f50626f607d438950a3a7dcb4b0dca60df5cd084fcd8f90f9669b158ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:05 compute-0 podman[298485]: 2025-09-30 14:58:05.757709265 +0000 UTC m=+0.139661646 container init 1189d763c6613c669f21e15418822f7c7891898b6adbe652834db3c584517e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:58:05 compute-0 podman[298485]: 2025-09-30 14:58:05.7663342 +0000 UTC m=+0.148286521 container start 1189d763c6613c669f21e15418822f7c7891898b6adbe652834db3c584517e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Sep 30 14:58:05 compute-0 podman[298485]: 2025-09-30 14:58:05.769908669 +0000 UTC m=+0.151861060 container attach 1189d763c6613c669f21e15418822f7c7891898b6adbe652834db3c584517e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mendel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:58:06 compute-0 nova_compute[261524]: 2025-09-30 14:58:06.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:06 compute-0 jovial_mendel[298501]: {
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:     "0": [
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:         {
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "devices": [
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "/dev/loop3"
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             ],
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "lv_name": "ceph_lv0",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "lv_size": "21470642176",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "name": "ceph_lv0",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "tags": {
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.cluster_name": "ceph",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.crush_device_class": "",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.encrypted": "0",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.osd_id": "0",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.type": "block",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.vdo": "0",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:                 "ceph.with_tpm": "0"
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             },
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "type": "block",
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:             "vg_name": "ceph_vg0"
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:         }
Sep 30 14:58:06 compute-0 jovial_mendel[298501]:     ]
Sep 30 14:58:06 compute-0 jovial_mendel[298501]: }
Sep 30 14:58:06 compute-0 systemd[1]: libpod-1189d763c6613c669f21e15418822f7c7891898b6adbe652834db3c584517e6e.scope: Deactivated successfully.
Sep 30 14:58:06 compute-0 podman[298513]: 2025-09-30 14:58:06.157755354 +0000 UTC m=+0.030284916 container died 1189d763c6613c669f21e15418822f7c7891898b6adbe652834db3c584517e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 14:58:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4da88f50626f607d438950a3a7dcb4b0dca60df5cd084fcd8f90f9669b158ea-merged.mount: Deactivated successfully.
Sep 30 14:58:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:06.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:06 compute-0 podman[298513]: 2025-09-30 14:58:06.214247624 +0000 UTC m=+0.086777106 container remove 1189d763c6613c669f21e15418822f7c7891898b6adbe652834db3c584517e6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Sep 30 14:58:06 compute-0 systemd[1]: libpod-conmon-1189d763c6613c669f21e15418822f7c7891898b6adbe652834db3c584517e6e.scope: Deactivated successfully.
Sep 30 14:58:06 compute-0 sudo[298381]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:06 compute-0 sudo[298530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:58:06 compute-0 sudo[298530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:06 compute-0 sudo[298530]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:06 compute-0 ceph-mon[74194]: pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 562 B/s rd, 0 op/s
Sep 30 14:58:06 compute-0 sudo[298556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:58:06 compute-0 sudo[298556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:06 compute-0 podman[298618]: 2025-09-30 14:58:06.829114143 +0000 UTC m=+0.027148298 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:58:07 compute-0 podman[298618]: 2025-09-30 14:58:07.001695829 +0000 UTC m=+0.199729984 container create 19eba0bdb8c92ba65d514bf6b30acaf6492bd6f9b0c0fc61ab4693db340f2e2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_borg, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:58:07 compute-0 sshd-session[298506]: Invalid user ubnt from 194.0.234.19 port 51304
Sep 30 14:58:07 compute-0 systemd[1]: Started libpod-conmon-19eba0bdb8c92ba65d514bf6b30acaf6492bd6f9b0c0fc61ab4693db340f2e2b.scope.
Sep 30 14:58:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:07.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:58:07 compute-0 sshd-session[298506]: pam_unix(sshd:auth): check pass; user unknown
Sep 30 14:58:07 compute-0 sshd-session[298506]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=194.0.234.19
Sep 30 14:58:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 562 B/s rd, 0 op/s
Sep 30 14:58:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:07.244Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:07 compute-0 nova_compute[261524]: 2025-09-30 14:58:07.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:07 compute-0 podman[298618]: 2025-09-30 14:58:07.300594055 +0000 UTC m=+0.498628200 container init 19eba0bdb8c92ba65d514bf6b30acaf6492bd6f9b0c0fc61ab4693db340f2e2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_borg, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:58:07 compute-0 podman[298618]: 2025-09-30 14:58:07.312727198 +0000 UTC m=+0.510761353 container start 19eba0bdb8c92ba65d514bf6b30acaf6492bd6f9b0c0fc61ab4693db340f2e2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_borg, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:58:07 compute-0 podman[298618]: 2025-09-30 14:58:07.319989619 +0000 UTC m=+0.518023754 container attach 19eba0bdb8c92ba65d514bf6b30acaf6492bd6f9b0c0fc61ab4693db340f2e2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:58:07 compute-0 silly_borg[298634]: 167 167
Sep 30 14:58:07 compute-0 systemd[1]: libpod-19eba0bdb8c92ba65d514bf6b30acaf6492bd6f9b0c0fc61ab4693db340f2e2b.scope: Deactivated successfully.
Sep 30 14:58:07 compute-0 conmon[298634]: conmon 19eba0bdb8c92ba65d51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19eba0bdb8c92ba65d514bf6b30acaf6492bd6f9b0c0fc61ab4693db340f2e2b.scope/container/memory.events
Sep 30 14:58:07 compute-0 podman[298618]: 2025-09-30 14:58:07.322996884 +0000 UTC m=+0.521031019 container died 19eba0bdb8c92ba65d514bf6b30acaf6492bd6f9b0c0fc61ab4693db340f2e2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_borg, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:58:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-75ad36169893966399aea1615fd97fc7db63c53fce39f31dd9ef74f91acbc68a-merged.mount: Deactivated successfully.
Sep 30 14:58:07 compute-0 podman[298618]: 2025-09-30 14:58:07.368314095 +0000 UTC m=+0.566348210 container remove 19eba0bdb8c92ba65d514bf6b30acaf6492bd6f9b0c0fc61ab4693db340f2e2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Sep 30 14:58:07 compute-0 systemd[1]: libpod-conmon-19eba0bdb8c92ba65d514bf6b30acaf6492bd6f9b0c0fc61ab4693db340f2e2b.scope: Deactivated successfully.
Sep 30 14:58:07 compute-0 podman[298659]: 2025-09-30 14:58:07.579333369 +0000 UTC m=+0.057624888 container create 4ab46c6a63134a774669f10812e22afadee0b600d184d56e36733fd43f0bbaa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:58:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:07 compute-0 systemd[1]: Started libpod-conmon-4ab46c6a63134a774669f10812e22afadee0b600d184d56e36733fd43f0bbaa1.scope.
Sep 30 14:58:07 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2beb51e1d7aa0f66584d432d55d77e6e61a0190826323881d0a9b6fb5c77105e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2beb51e1d7aa0f66584d432d55d77e6e61a0190826323881d0a9b6fb5c77105e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2beb51e1d7aa0f66584d432d55d77e6e61a0190826323881d0a9b6fb5c77105e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2beb51e1d7aa0f66584d432d55d77e6e61a0190826323881d0a9b6fb5c77105e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:58:07 compute-0 podman[298659]: 2025-09-30 14:58:07.649011658 +0000 UTC m=+0.127303177 container init 4ab46c6a63134a774669f10812e22afadee0b600d184d56e36733fd43f0bbaa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:58:07 compute-0 podman[298659]: 2025-09-30 14:58:07.560206622 +0000 UTC m=+0.038498161 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:58:07 compute-0 podman[298659]: 2025-09-30 14:58:07.657904089 +0000 UTC m=+0.136195578 container start 4ab46c6a63134a774669f10812e22afadee0b600d184d56e36733fd43f0bbaa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:58:07 compute-0 podman[298659]: 2025-09-30 14:58:07.661355456 +0000 UTC m=+0.139646955 container attach 4ab46c6a63134a774669f10812e22afadee0b600d184d56e36733fd43f0bbaa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Sep 30 14:58:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:08.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:08 compute-0 lvm[298753]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:58:08 compute-0 lvm[298753]: VG ceph_vg0 finished
Sep 30 14:58:08 compute-0 elastic_dirac[298676]: {}
Sep 30 14:58:08 compute-0 systemd[1]: libpod-4ab46c6a63134a774669f10812e22afadee0b600d184d56e36733fd43f0bbaa1.scope: Deactivated successfully.
Sep 30 14:58:08 compute-0 systemd[1]: libpod-4ab46c6a63134a774669f10812e22afadee0b600d184d56e36733fd43f0bbaa1.scope: Consumed 1.128s CPU time.
Sep 30 14:58:08 compute-0 podman[298659]: 2025-09-30 14:58:08.383784089 +0000 UTC m=+0.862075638 container died 4ab46c6a63134a774669f10812e22afadee0b600d184d56e36733fd43f0bbaa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:58:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2beb51e1d7aa0f66584d432d55d77e6e61a0190826323881d0a9b6fb5c77105e-merged.mount: Deactivated successfully.
Sep 30 14:58:08 compute-0 ceph-mon[74194]: pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 562 B/s rd, 0 op/s
Sep 30 14:58:08 compute-0 podman[298659]: 2025-09-30 14:58:08.432948975 +0000 UTC m=+0.911240464 container remove 4ab46c6a63134a774669f10812e22afadee0b600d184d56e36733fd43f0bbaa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Sep 30 14:58:08 compute-0 systemd[1]: libpod-conmon-4ab46c6a63134a774669f10812e22afadee0b600d184d56e36733fd43f0bbaa1.scope: Deactivated successfully.
Sep 30 14:58:08 compute-0 sudo[298556]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:58:08 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:58:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:58:08 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:58:08 compute-0 sudo[298767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:58:08 compute-0 sudo[298767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:08 compute-0 sudo[298767]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:08 compute-0 sudo[298792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:58:08 compute-0 sudo[298792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:08 compute-0 sudo[298792]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:08.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:09.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 562 B/s rd, 0 op/s
Sep 30 14:58:09 compute-0 sshd-session[298506]: Failed password for invalid user ubnt from 194.0.234.19 port 51304 ssh2
Sep 30 14:58:09 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:58:09 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:58:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:10.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:10 compute-0 ceph-mon[74194]: pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 562 B/s rd, 0 op/s
Sep 30 14:58:11 compute-0 nova_compute[261524]: 2025-09-30 14:58:11.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:11.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 843 B/s rd, 0 op/s
Sep 30 14:58:11 compute-0 sshd-session[298506]: Connection closed by invalid user ubnt 194.0.234.19 port 51304 [preauth]
Sep 30 14:58:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2050251909' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:58:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/2050251909' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:58:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:12.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:12 compute-0 nova_compute[261524]: 2025-09-30 14:58:12.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:12 compute-0 ceph-mon[74194]: pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 843 B/s rd, 0 op/s
Sep 30 14:58:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:58:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:13.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:58:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 562 B/s rd, 0 op/s
Sep 30 14:58:13 compute-0 ceph-mon[74194]: pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 562 B/s rd, 0 op/s
Sep 30 14:58:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:13.747Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:14.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:58:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:58:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:14] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:58:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:14] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:58:14 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:58:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 14:58:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:15.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 14:58:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:15 compute-0 ceph-mon[74194]: pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:16 compute-0 nova_compute[261524]: 2025-09-30 14:58:16.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:16.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:17.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:17.245Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:58:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:17.245Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:58:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:17.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:58:17 compute-0 nova_compute[261524]: 2025-09-30 14:58:17.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:18.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:18 compute-0 ceph-mon[74194]: pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:18.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:19 compute-0 podman[298830]: 2025-09-30 14:58:19.145974703 +0000 UTC m=+0.057612268 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Sep 30 14:58:19 compute-0 podman[298827]: 2025-09-30 14:58:19.146086226 +0000 UTC m=+0.062365557 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:58:19 compute-0 podman[298829]: 2025-09-30 14:58:19.146409404 +0000 UTC m=+0.062840518 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Sep 30 14:58:19 compute-0 podman[298828]: 2025-09-30 14:58:19.178935396 +0000 UTC m=+0.096154250 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Sep 30 14:58:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:20.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:20 compute-0 ceph-mon[74194]: pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:21 compute-0 nova_compute[261524]: 2025-09-30 14:58:21.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:21.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:22.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:22 compute-0 nova_compute[261524]: 2025-09-30 14:58:22.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:22 compute-0 ceph-mon[74194]: pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:22 compute-0 nova_compute[261524]: 2025-09-30 14:58:22.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:58:22 compute-0 nova_compute[261524]: 2025-09-30 14:58:22.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:58:22 compute-0 nova_compute[261524]: 2025-09-30 14:58:22.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:58:22 compute-0 nova_compute[261524]: 2025-09-30 14:58:22.988 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:58:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:23.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:23.749Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:58:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:23.749Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:58:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:24.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:24 compute-0 ceph-mon[74194]: pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1214440273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:58:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:24] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:58:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:24] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Sep 30 14:58:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:25.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1536201965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:58:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1317879525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:58:25 compute-0 nova_compute[261524]: 2025-09-30 14:58:25.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:58:25 compute-0 nova_compute[261524]: 2025-09-30 14:58:25.968 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:58:25 compute-0 nova_compute[261524]: 2025-09-30 14:58:25.968 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:58:25 compute-0 nova_compute[261524]: 2025-09-30 14:58:25.968 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:58:25 compute-0 nova_compute[261524]: 2025-09-30 14:58:25.968 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:58:26 compute-0 nova_compute[261524]: 2025-09-30 14:58:26.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:26 compute-0 ceph-mon[74194]: pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1562544226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:58:26 compute-0 nova_compute[261524]: 2025-09-30 14:58:26.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:58:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:27.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:27.246Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:27 compute-0 nova_compute[261524]: 2025-09-30 14:58:27.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:27 compute-0 nova_compute[261524]: 2025-09-30 14:58:27.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:58:27 compute-0 nova_compute[261524]: 2025-09-30 14:58:27.975 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:58:27 compute-0 nova_compute[261524]: 2025-09-30 14:58:27.976 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:58:27 compute-0 nova_compute[261524]: 2025-09-30 14:58:27.976 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:58:27 compute-0 nova_compute[261524]: 2025-09-30 14:58:27.977 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:58:27 compute-0 nova_compute[261524]: 2025-09-30 14:58:27.977 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:58:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:58:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911441946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:58:28 compute-0 nova_compute[261524]: 2025-09-30 14:58:28.430 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:58:28 compute-0 ceph-mon[74194]: pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1911441946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:58:28 compute-0 nova_compute[261524]: 2025-09-30 14:58:28.607 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:58:28 compute-0 nova_compute[261524]: 2025-09-30 14:58:28.608 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4482MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:58:28 compute-0 nova_compute[261524]: 2025-09-30 14:58:28.609 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:58:28 compute-0 nova_compute[261524]: 2025-09-30 14:58:28.609 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:58:28 compute-0 nova_compute[261524]: 2025-09-30 14:58:28.682 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:58:28 compute-0 nova_compute[261524]: 2025-09-30 14:58:28.682 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:58:28 compute-0 nova_compute[261524]: 2025-09-30 14:58:28.695 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:58:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:28.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:28 compute-0 sudo[298958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:58:28 compute-0 sudo[298958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:28 compute-0 sudo[298958]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:58:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869495960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:58:29 compute-0 nova_compute[261524]: 2025-09-30 14:58:29.163 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:58:29 compute-0 nova_compute[261524]: 2025-09-30 14:58:29.171 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:58:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:29.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:29 compute-0 nova_compute[261524]: 2025-09-30 14:58:29.211 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:58:29 compute-0 nova_compute[261524]: 2025-09-30 14:58:29.213 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:58:29 compute-0 nova_compute[261524]: 2025-09-30 14:58:29.213 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:58:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:29 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3869495960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:58:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:58:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:58:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:58:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:58:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:58:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:58:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:58:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:58:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=cleanup t=2025-09-30T14:58:29.857058029Z level=info msg="Completed cleanup jobs" duration=15.184089ms
Sep 30 14:58:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=plugins.update.checker t=2025-09-30T14:58:29.958608353Z level=info msg="Update check succeeded" duration=48.808398ms
Sep 30 14:58:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-grafana-compute-0[106217]: logger=grafana.update.checker t=2025-09-30T14:58:29.966712545Z level=info msg="Update check succeeded" duration=53.943506ms
Sep 30 14:58:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:30 compute-0 ceph-mon[74194]: pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:30 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:58:31 compute-0 nova_compute[261524]: 2025-09-30 14:58:31.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:31 compute-0 nova_compute[261524]: 2025-09-30 14:58:31.208 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:58:31 compute-0 nova_compute[261524]: 2025-09-30 14:58:31.209 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:58:31 compute-0 nova_compute[261524]: 2025-09-30 14:58:31.209 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:58:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:31.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:31 compute-0 ceph-mon[74194]: pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:32.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:32 compute-0 nova_compute[261524]: 2025-09-30 14:58:32.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:33.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:33.750Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:58:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:33.750Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:34.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:34 compute-0 ceph-mon[74194]: pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:34] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:58:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:34] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:58:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:35.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:36 compute-0 nova_compute[261524]: 2025-09-30 14:58:36.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:36.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:36 compute-0 ceph-mon[74194]: pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:37.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:37.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:37 compute-0 nova_compute[261524]: 2025-09-30 14:58:37.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:38.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:58:38.278 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:58:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:58:38.278 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:58:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:58:38.279 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:58:38 compute-0 ceph-mon[74194]: pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:38.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:58:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:38.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:58:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:39.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:40.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:40 compute-0 ceph-mon[74194]: pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:41 compute-0 nova_compute[261524]: 2025-09-30 14:58:41.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:41.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:42.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:42 compute-0 nova_compute[261524]: 2025-09-30 14:58:42.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:42 compute-0 ceph-mon[74194]: pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:43.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:43.752Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:44.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:44 compute-0 ceph-mon[74194]: pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:58:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:58:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:44] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Sep 30 14:58:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:44] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Sep 30 14:58:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:45.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:58:46 compute-0 nova_compute[261524]: 2025-09-30 14:58:46.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:46 compute-0 ceph-mon[74194]: pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:47.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:47.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:47 compute-0 nova_compute[261524]: 2025-09-30 14:58:47.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.464527) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244327464559, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1926, "num_deletes": 251, "total_data_size": 3814744, "memory_usage": 3872656, "flush_reason": "Manual Compaction"}
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244327492134, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3698497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35209, "largest_seqno": 37134, "table_properties": {"data_size": 3689829, "index_size": 5354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17837, "raw_average_key_size": 20, "raw_value_size": 3672499, "raw_average_value_size": 4159, "num_data_blocks": 232, "num_entries": 883, "num_filter_entries": 883, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759244132, "oldest_key_time": 1759244132, "file_creation_time": 1759244327, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 27716 microseconds, and 7303 cpu microseconds.
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.492235) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3698497 bytes OK
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.492260) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.494972) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.494998) EVENT_LOG_v1 {"time_micros": 1759244327494991, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.495021) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3806903, prev total WAL file size 3806903, number of live WAL files 2.
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.496053) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3611KB)], [77(10MB)]
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244327496086, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15172863, "oldest_snapshot_seqno": -1}
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6770 keys, 13022082 bytes, temperature: kUnknown
Sep 30 14:58:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244327616046, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 13022082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12979306, "index_size": 24744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 177567, "raw_average_key_size": 26, "raw_value_size": 12859881, "raw_average_value_size": 1899, "num_data_blocks": 972, "num_entries": 6770, "num_filter_entries": 6770, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759244327, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.616412) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13022082 bytes
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.618224) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.4 rd, 108.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.9 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 7286, records dropped: 516 output_compression: NoCompression
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.618253) EVENT_LOG_v1 {"time_micros": 1759244327618240, "job": 44, "event": "compaction_finished", "compaction_time_micros": 120045, "compaction_time_cpu_micros": 31617, "output_level": 6, "num_output_files": 1, "total_output_size": 13022082, "num_input_records": 7286, "num_output_records": 6770, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244327619595, "job": 44, "event": "table_file_deletion", "file_number": 79}
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244327623507, "job": 44, "event": "table_file_deletion", "file_number": 77}
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.496011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.623553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.623558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.623561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.623564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:58:47 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-14:58:47.623566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 14:58:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:48.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:48 compute-0 ceph-mon[74194]: pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:48.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:48 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:49 compute-0 sudo[299005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:58:49 compute-0 sudo[299005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:58:49 compute-0 sudo[299005]: pam_unix(sudo:session): session closed for user root
Sep 30 14:58:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:49.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:50 compute-0 podman[299034]: 2025-09-30 14:58:50.136127436 +0000 UTC m=+0.055556247 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:58:50 compute-0 podman[299032]: 2025-09-30 14:58:50.13628407 +0000 UTC m=+0.058133461 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid)
Sep 30 14:58:50 compute-0 podman[299033]: 2025-09-30 14:58:50.158984176 +0000 UTC m=+0.079927695 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Sep 30 14:58:50 compute-0 podman[299035]: 2025-09-30 14:58:50.160574936 +0000 UTC m=+0.072915630 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Sep 30 14:58:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:50.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:50 compute-0 ceph-mon[74194]: pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:51 compute-0 nova_compute[261524]: 2025-09-30 14:58:51.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:51.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:52.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:52 compute-0 nova_compute[261524]: 2025-09-30 14:58:52.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:52 compute-0 ceph-mon[74194]: pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:53.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:53.752Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:54.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:54 compute-0 ceph-mon[74194]: pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:54] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Sep 30 14:58:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:58:54] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Sep 30 14:58:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:55.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:55 compute-0 ceph-mon[74194]: pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:56 compute-0 nova_compute[261524]: 2025-09-30 14:58:56.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:56.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:57.249Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:58:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:58:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:57.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:58:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:57 compute-0 nova_compute[261524]: 2025-09-30 14:58:57.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:58:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:58:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:58:58.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:58 compute-0 ceph-mon[74194]: pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:58:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:58.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:58:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:58.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:58:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:58:58.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:58:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:58:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:58:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:58:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:58:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:58:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:58:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:58:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:58:59.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:58:59
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.nfs', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'volumes', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta']
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:58:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:58:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:58:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 14:59:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 14:59:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:00.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:00 compute-0 ceph-mon[74194]: pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:59:01 compute-0 nova_compute[261524]: 2025-09-30 14:59:01.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 14:59:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:01.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:02.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:02 compute-0 nova_compute[261524]: 2025-09-30 14:59:02.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:02 compute-0 ceph-mon[74194]: pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:03.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:03.753Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:04.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:04 compute-0 ceph-mon[74194]: pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:04] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:59:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:04] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:59:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:05.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:06 compute-0 nova_compute[261524]: 2025-09-30 14:59:06.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:06.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:06 compute-0 ceph-mon[74194]: pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:07.250Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:07.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:07 compute-0 nova_compute[261524]: 2025-09-30 14:59:07.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:08.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:08 compute-0 ceph-mon[74194]: pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:08 compute-0 sudo[299130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:59:08 compute-0 sudo[299130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:08 compute-0 sudo[299130]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:08.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:59:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:08.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:59:08 compute-0 sudo[299155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 14:59:08 compute-0 sudo[299155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:09 compute-0 sudo[299180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:59:09 compute-0 sudo[299180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:09 compute-0 sudo[299180]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:09.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:09 compute-0 sudo[299155]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Sep 30 14:59:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:59:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:59:09 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:59:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 14:59:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:59:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 0 op/s
Sep 30 14:59:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 0 op/s
Sep 30 14:59:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 14:59:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:59:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 14:59:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:59:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 14:59:09 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:59:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 14:59:09 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:59:09 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 14:59:09 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:59:09 compute-0 sudo[299239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:59:09 compute-0 sudo[299239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:09 compute-0 sudo[299239]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:09 compute-0 sudo[299264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 14:59:09 compute-0 sudo[299264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:10 compute-0 podman[299329]: 2025-09-30 14:59:10.219269967 +0000 UTC m=+0.049842884 container create 9e770a82908a0514728cc3336b2b3de8ef3fa98e8e100a3007d256b478094548 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:59:10 compute-0 systemd[1]: Started libpod-conmon-9e770a82908a0514728cc3336b2b3de8ef3fa98e8e100a3007d256b478094548.scope.
Sep 30 14:59:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:10.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:10 compute-0 podman[299329]: 2025-09-30 14:59:10.192631083 +0000 UTC m=+0.023204020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:59:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:59:10 compute-0 podman[299329]: 2025-09-30 14:59:10.315409656 +0000 UTC m=+0.145982563 container init 9e770a82908a0514728cc3336b2b3de8ef3fa98e8e100a3007d256b478094548 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Sep 30 14:59:10 compute-0 podman[299329]: 2025-09-30 14:59:10.325268602 +0000 UTC m=+0.155841509 container start 9e770a82908a0514728cc3336b2b3de8ef3fa98e8e100a3007d256b478094548 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 14:59:10 compute-0 podman[299329]: 2025-09-30 14:59:10.329858616 +0000 UTC m=+0.160431513 container attach 9e770a82908a0514728cc3336b2b3de8ef3fa98e8e100a3007d256b478094548 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:59:10 compute-0 fervent_beaver[299345]: 167 167
Sep 30 14:59:10 compute-0 systemd[1]: libpod-9e770a82908a0514728cc3336b2b3de8ef3fa98e8e100a3007d256b478094548.scope: Deactivated successfully.
Sep 30 14:59:10 compute-0 podman[299329]: 2025-09-30 14:59:10.334775429 +0000 UTC m=+0.165348326 container died 9e770a82908a0514728cc3336b2b3de8ef3fa98e8e100a3007d256b478094548 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Sep 30 14:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ea072cd26123f5c9171fd4abddf24410e873a3c9faf630565b761aad6dbd037-merged.mount: Deactivated successfully.
Sep 30 14:59:10 compute-0 podman[299329]: 2025-09-30 14:59:10.371476315 +0000 UTC m=+0.202049192 container remove 9e770a82908a0514728cc3336b2b3de8ef3fa98e8e100a3007d256b478094548 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:59:10 compute-0 systemd[1]: libpod-conmon-9e770a82908a0514728cc3336b2b3de8ef3fa98e8e100a3007d256b478094548.scope: Deactivated successfully.
Sep 30 14:59:10 compute-0 ceph-mon[74194]: pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Sep 30 14:59:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:59:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 14:59:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:59:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:59:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 14:59:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 14:59:10 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 14:59:10 compute-0 podman[299369]: 2025-09-30 14:59:10.540616183 +0000 UTC m=+0.041241000 container create f961a046bc79fda9d64e50bc3522c61330dd659e6b59e01a15e3500962f70212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_taussig, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 14:59:10 compute-0 systemd[1]: Started libpod-conmon-f961a046bc79fda9d64e50bc3522c61330dd659e6b59e01a15e3500962f70212.scope.
Sep 30 14:59:10 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df6b950fe59e73fe5a29e088c2423d9181b026f30ddd4898b88601da1649523/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df6b950fe59e73fe5a29e088c2423d9181b026f30ddd4898b88601da1649523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:10 compute-0 podman[299369]: 2025-09-30 14:59:10.525046045 +0000 UTC m=+0.025670882 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df6b950fe59e73fe5a29e088c2423d9181b026f30ddd4898b88601da1649523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df6b950fe59e73fe5a29e088c2423d9181b026f30ddd4898b88601da1649523/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df6b950fe59e73fe5a29e088c2423d9181b026f30ddd4898b88601da1649523/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:10 compute-0 podman[299369]: 2025-09-30 14:59:10.632236449 +0000 UTC m=+0.132861286 container init f961a046bc79fda9d64e50bc3522c61330dd659e6b59e01a15e3500962f70212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 14:59:10 compute-0 podman[299369]: 2025-09-30 14:59:10.644508375 +0000 UTC m=+0.145133232 container start f961a046bc79fda9d64e50bc3522c61330dd659e6b59e01a15e3500962f70212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:59:10 compute-0 podman[299369]: 2025-09-30 14:59:10.648767072 +0000 UTC m=+0.149391919 container attach f961a046bc79fda9d64e50bc3522c61330dd659e6b59e01a15e3500962f70212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:59:10 compute-0 charming_taussig[299385]: --> passed data devices: 0 physical, 1 LVM
Sep 30 14:59:10 compute-0 charming_taussig[299385]: --> All data devices are unavailable
Sep 30 14:59:11 compute-0 systemd[1]: libpod-f961a046bc79fda9d64e50bc3522c61330dd659e6b59e01a15e3500962f70212.scope: Deactivated successfully.
Sep 30 14:59:11 compute-0 podman[299369]: 2025-09-30 14:59:11.029851559 +0000 UTC m=+0.530476376 container died f961a046bc79fda9d64e50bc3522c61330dd659e6b59e01a15e3500962f70212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5df6b950fe59e73fe5a29e088c2423d9181b026f30ddd4898b88601da1649523-merged.mount: Deactivated successfully.
Sep 30 14:59:11 compute-0 podman[299369]: 2025-09-30 14:59:11.076160774 +0000 UTC m=+0.576785601 container remove f961a046bc79fda9d64e50bc3522c61330dd659e6b59e01a15e3500962f70212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_taussig, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Sep 30 14:59:11 compute-0 systemd[1]: libpod-conmon-f961a046bc79fda9d64e50bc3522c61330dd659e6b59e01a15e3500962f70212.scope: Deactivated successfully.
Sep 30 14:59:11 compute-0 sudo[299264]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:11 compute-0 nova_compute[261524]: 2025-09-30 14:59:11.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:11 compute-0 sudo[299415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:59:11 compute-0 sudo[299415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:11 compute-0 sudo[299415]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:11.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:11 compute-0 sudo[299440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 14:59:11 compute-0 sudo[299440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:11 compute-0 ceph-mon[74194]: pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 592 B/s rd, 0 op/s
Sep 30 14:59:11 compute-0 ceph-mon[74194]: pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 0 op/s
Sep 30 14:59:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/869483855' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 14:59:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/869483855' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 14:59:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 733 B/s rd, 0 op/s
Sep 30 14:59:11 compute-0 podman[299506]: 2025-09-30 14:59:11.739600536 +0000 UTC m=+0.053558507 container create 395bcddb157c7999ca664f4171630115c3601419c392e35e1142ba6153ae238d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sutherland, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:59:11 compute-0 systemd[1]: Started libpod-conmon-395bcddb157c7999ca664f4171630115c3601419c392e35e1142ba6153ae238d.scope.
Sep 30 14:59:11 compute-0 podman[299506]: 2025-09-30 14:59:11.711027233 +0000 UTC m=+0.024985264 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:59:11 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:59:11 compute-0 podman[299506]: 2025-09-30 14:59:11.839119209 +0000 UTC m=+0.153077240 container init 395bcddb157c7999ca664f4171630115c3601419c392e35e1142ba6153ae238d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Sep 30 14:59:11 compute-0 podman[299506]: 2025-09-30 14:59:11.850421631 +0000 UTC m=+0.164379572 container start 395bcddb157c7999ca664f4171630115c3601419c392e35e1142ba6153ae238d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sutherland, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:59:11 compute-0 podman[299506]: 2025-09-30 14:59:11.853977579 +0000 UTC m=+0.167935560 container attach 395bcddb157c7999ca664f4171630115c3601419c392e35e1142ba6153ae238d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sutherland, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 14:59:11 compute-0 mystifying_sutherland[299522]: 167 167
Sep 30 14:59:11 compute-0 systemd[1]: libpod-395bcddb157c7999ca664f4171630115c3601419c392e35e1142ba6153ae238d.scope: Deactivated successfully.
Sep 30 14:59:11 compute-0 podman[299506]: 2025-09-30 14:59:11.856274127 +0000 UTC m=+0.170232148 container died 395bcddb157c7999ca664f4171630115c3601419c392e35e1142ba6153ae238d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Sep 30 14:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ece8e9d2e9ecba557dc7d4b3294eb7b3e3ee6dfb9b2bc1c87d4f2dc6a2ee45e0-merged.mount: Deactivated successfully.
Sep 30 14:59:11 compute-0 podman[299506]: 2025-09-30 14:59:11.899655399 +0000 UTC m=+0.213613340 container remove 395bcddb157c7999ca664f4171630115c3601419c392e35e1142ba6153ae238d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_sutherland, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 14:59:11 compute-0 systemd[1]: libpod-conmon-395bcddb157c7999ca664f4171630115c3601419c392e35e1142ba6153ae238d.scope: Deactivated successfully.
Sep 30 14:59:12 compute-0 podman[299546]: 2025-09-30 14:59:12.078682745 +0000 UTC m=+0.054242364 container create 29d743e2b15ba83ec74da2d4a1c84076cd96df5a379382ec160c58d5012a039c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_feistel, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:59:12 compute-0 systemd[1]: Started libpod-conmon-29d743e2b15ba83ec74da2d4a1c84076cd96df5a379382ec160c58d5012a039c.scope.
Sep 30 14:59:12 compute-0 podman[299546]: 2025-09-30 14:59:12.055297172 +0000 UTC m=+0.030856821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:59:12 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ed60ae367b58c18c835f2f356f8ad7aa513723b50bad75ba43cb70a9ca0966/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ed60ae367b58c18c835f2f356f8ad7aa513723b50bad75ba43cb70a9ca0966/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ed60ae367b58c18c835f2f356f8ad7aa513723b50bad75ba43cb70a9ca0966/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ed60ae367b58c18c835f2f356f8ad7aa513723b50bad75ba43cb70a9ca0966/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:12 compute-0 podman[299546]: 2025-09-30 14:59:12.171145852 +0000 UTC m=+0.146705491 container init 29d743e2b15ba83ec74da2d4a1c84076cd96df5a379382ec160c58d5012a039c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Sep 30 14:59:12 compute-0 podman[299546]: 2025-09-30 14:59:12.178259749 +0000 UTC m=+0.153819348 container start 29d743e2b15ba83ec74da2d4a1c84076cd96df5a379382ec160c58d5012a039c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_feistel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Sep 30 14:59:12 compute-0 podman[299546]: 2025-09-30 14:59:12.182030753 +0000 UTC m=+0.157590372 container attach 29d743e2b15ba83ec74da2d4a1c84076cd96df5a379382ec160c58d5012a039c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_feistel, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 14:59:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:12.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:12 compute-0 nova_compute[261524]: 2025-09-30 14:59:12.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]: {
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:     "0": [
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:         {
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "devices": [
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "/dev/loop3"
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             ],
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "lv_name": "ceph_lv0",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "lv_size": "21470642176",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "name": "ceph_lv0",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "tags": {
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.cluster_name": "ceph",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.crush_device_class": "",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.encrypted": "0",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.osd_id": "0",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.type": "block",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.vdo": "0",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:                 "ceph.with_tpm": "0"
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             },
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "type": "block",
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:             "vg_name": "ceph_vg0"
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:         }
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]:     ]
Sep 30 14:59:12 compute-0 mystifying_feistel[299562]: }
Sep 30 14:59:12 compute-0 systemd[1]: libpod-29d743e2b15ba83ec74da2d4a1c84076cd96df5a379382ec160c58d5012a039c.scope: Deactivated successfully.
Sep 30 14:59:12 compute-0 podman[299546]: 2025-09-30 14:59:12.509012031 +0000 UTC m=+0.484571680 container died 29d743e2b15ba83ec74da2d4a1c84076cd96df5a379382ec160c58d5012a039c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_feistel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Sep 30 14:59:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0ed60ae367b58c18c835f2f356f8ad7aa513723b50bad75ba43cb70a9ca0966-merged.mount: Deactivated successfully.
Sep 30 14:59:12 compute-0 podman[299546]: 2025-09-30 14:59:12.552606909 +0000 UTC m=+0.528166528 container remove 29d743e2b15ba83ec74da2d4a1c84076cd96df5a379382ec160c58d5012a039c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_feistel, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Sep 30 14:59:12 compute-0 systemd[1]: libpod-conmon-29d743e2b15ba83ec74da2d4a1c84076cd96df5a379382ec160c58d5012a039c.scope: Deactivated successfully.
Sep 30 14:59:12 compute-0 sudo[299440]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:12 compute-0 sudo[299583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 14:59:12 compute-0 sudo[299583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:12 compute-0 sudo[299583]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:12 compute-0 sudo[299608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 14:59:12 compute-0 sudo[299608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:13 compute-0 podman[299672]: 2025-09-30 14:59:13.141405408 +0000 UTC m=+0.040929632 container create cc3156fbb85411deb878d98ba04ab9cde10869f581a3ce82ce13bd9bc2cca06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Sep 30 14:59:13 compute-0 systemd[1]: Started libpod-conmon-cc3156fbb85411deb878d98ba04ab9cde10869f581a3ce82ce13bd9bc2cca06b.scope.
Sep 30 14:59:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:59:13 compute-0 podman[299672]: 2025-09-30 14:59:13.122539287 +0000 UTC m=+0.022063541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:59:13 compute-0 podman[299672]: 2025-09-30 14:59:13.224029969 +0000 UTC m=+0.123554223 container init cc3156fbb85411deb878d98ba04ab9cde10869f581a3ce82ce13bd9bc2cca06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_montalcini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Sep 30 14:59:13 compute-0 podman[299672]: 2025-09-30 14:59:13.231118116 +0000 UTC m=+0.130642340 container start cc3156fbb85411deb878d98ba04ab9cde10869f581a3ce82ce13bd9bc2cca06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Sep 30 14:59:13 compute-0 funny_montalcini[299688]: 167 167
Sep 30 14:59:13 compute-0 systemd[1]: libpod-cc3156fbb85411deb878d98ba04ab9cde10869f581a3ce82ce13bd9bc2cca06b.scope: Deactivated successfully.
Sep 30 14:59:13 compute-0 podman[299672]: 2025-09-30 14:59:13.236347576 +0000 UTC m=+0.135871800 container attach cc3156fbb85411deb878d98ba04ab9cde10869f581a3ce82ce13bd9bc2cca06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_montalcini, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Sep 30 14:59:13 compute-0 podman[299672]: 2025-09-30 14:59:13.236646304 +0000 UTC m=+0.136170528 container died cc3156fbb85411deb878d98ba04ab9cde10869f581a3ce82ce13bd9bc2cca06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Sep 30 14:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4eb9c33ccb03586b6323836e4250b3b1ff6f835c105f22082bfd0d7ab523647-merged.mount: Deactivated successfully.
Sep 30 14:59:13 compute-0 podman[299672]: 2025-09-30 14:59:13.271013771 +0000 UTC m=+0.170537995 container remove cc3156fbb85411deb878d98ba04ab9cde10869f581a3ce82ce13bd9bc2cca06b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 14:59:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:13.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:13 compute-0 systemd[1]: libpod-conmon-cc3156fbb85411deb878d98ba04ab9cde10869f581a3ce82ce13bd9bc2cca06b.scope: Deactivated successfully.
Sep 30 14:59:13 compute-0 podman[299710]: 2025-09-30 14:59:13.441939686 +0000 UTC m=+0.056918971 container create 705393e834759515c0736366e8fc6818de8eac03d7a0317df853dea4aea5be6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_tesla, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 14:59:13 compute-0 systemd[1]: Started libpod-conmon-705393e834759515c0736366e8fc6818de8eac03d7a0317df853dea4aea5be6a.scope.
Sep 30 14:59:13 compute-0 podman[299710]: 2025-09-30 14:59:13.417382093 +0000 UTC m=+0.032361458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 14:59:13 compute-0 systemd[1]: Started libcrun container.
Sep 30 14:59:13 compute-0 ceph-mon[74194]: pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 733 B/s rd, 0 op/s
Sep 30 14:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb95b2aba0455c9810ff2bf3bf642a1b1631ffbaa683bfc436358662afc2a671/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb95b2aba0455c9810ff2bf3bf642a1b1631ffbaa683bfc436358662afc2a671/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb95b2aba0455c9810ff2bf3bf642a1b1631ffbaa683bfc436358662afc2a671/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb95b2aba0455c9810ff2bf3bf642a1b1631ffbaa683bfc436358662afc2a671/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 14:59:13 compute-0 podman[299710]: 2025-09-30 14:59:13.536069354 +0000 UTC m=+0.151048649 container init 705393e834759515c0736366e8fc6818de8eac03d7a0317df853dea4aea5be6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_tesla, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Sep 30 14:59:13 compute-0 podman[299710]: 2025-09-30 14:59:13.541723105 +0000 UTC m=+0.156702390 container start 705393e834759515c0736366e8fc6818de8eac03d7a0317df853dea4aea5be6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Sep 30 14:59:13 compute-0 podman[299710]: 2025-09-30 14:59:13.545341165 +0000 UTC m=+0.160320460 container attach 705393e834759515c0736366e8fc6818de8eac03d7a0317df853dea4aea5be6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_tesla, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 14:59:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 733 B/s rd, 0 op/s
Sep 30 14:59:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:13.754Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:14 compute-0 lvm[299803]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 14:59:14 compute-0 lvm[299803]: VG ceph_vg0 finished
Sep 30 14:59:14 compute-0 dazzling_tesla[299728]: {}
Sep 30 14:59:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:14.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:14 compute-0 systemd[1]: libpod-705393e834759515c0736366e8fc6818de8eac03d7a0317df853dea4aea5be6a.scope: Deactivated successfully.
Sep 30 14:59:14 compute-0 podman[299710]: 2025-09-30 14:59:14.293242943 +0000 UTC m=+0.908222248 container died 705393e834759515c0736366e8fc6818de8eac03d7a0317df853dea4aea5be6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Sep 30 14:59:14 compute-0 systemd[1]: libpod-705393e834759515c0736366e8fc6818de8eac03d7a0317df853dea4aea5be6a.scope: Consumed 1.167s CPU time.
Sep 30 14:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb95b2aba0455c9810ff2bf3bf642a1b1631ffbaa683bfc436358662afc2a671-merged.mount: Deactivated successfully.
Sep 30 14:59:14 compute-0 podman[299710]: 2025-09-30 14:59:14.350998994 +0000 UTC m=+0.965978289 container remove 705393e834759515c0736366e8fc6818de8eac03d7a0317df853dea4aea5be6a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_tesla, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Sep 30 14:59:14 compute-0 systemd[1]: libpod-conmon-705393e834759515c0736366e8fc6818de8eac03d7a0317df853dea4aea5be6a.scope: Deactivated successfully.
Sep 30 14:59:14 compute-0 sudo[299608]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 14:59:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:59:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 14:59:14 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:59:14 compute-0 sudo[299818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 14:59:14 compute-0 sudo[299818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:14 compute-0 sudo[299818]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:59:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:59:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:14] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:59:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:14] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:59:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:15.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:15 compute-0 ceph-mon[74194]: pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 733 B/s rd, 0 op/s
Sep 30 14:59:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:59:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 14:59:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:59:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 0 op/s
Sep 30 14:59:16 compute-0 nova_compute[261524]: 2025-09-30 14:59:16.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:16.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:17.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:17.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:17 compute-0 nova_compute[261524]: 2025-09-30 14:59:17.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:17 compute-0 ceph-mon[74194]: pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 0 op/s
Sep 30 14:59:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 733 B/s rd, 0 op/s
Sep 30 14:59:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:18.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:18.905Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:59:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:18.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:59:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:19.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:19 compute-0 ceph-mon[74194]: pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 733 B/s rd, 0 op/s
Sep 30 14:59:19 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Sep 30 14:59:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:20.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:21 compute-0 podman[299852]: 2025-09-30 14:59:21.136630361 +0000 UTC m=+0.055665770 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Sep 30 14:59:21 compute-0 podman[299849]: 2025-09-30 14:59:21.146935348 +0000 UTC m=+0.068524680 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Sep 30 14:59:21 compute-0 nova_compute[261524]: 2025-09-30 14:59:21.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:21 compute-0 podman[299851]: 2025-09-30 14:59:21.166282811 +0000 UTC m=+0.088223302 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Sep 30 14:59:21 compute-0 podman[299850]: 2025-09-30 14:59:21.196251378 +0000 UTC m=+0.118337803 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Sep 30 14:59:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:21.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:21 compute-0 ceph-mon[74194]: pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Sep 30 14:59:21 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:22.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:22 compute-0 nova_compute[261524]: 2025-09-30 14:59:22.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:22 compute-0 nova_compute[261524]: 2025-09-30 14:59:22.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:23.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:23 compute-0 ceph-mon[74194]: pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:23 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:23.757Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 14:59:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:23.758Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:23 compute-0 nova_compute[261524]: 2025-09-30 14:59:23.965 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:23 compute-0 nova_compute[261524]: 2025-09-30 14:59:23.966 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 14:59:23 compute-0 nova_compute[261524]: 2025-09-30 14:59:23.966 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 14:59:23 compute-0 nova_compute[261524]: 2025-09-30 14:59:23.981 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 14:59:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:24.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:24 compute-0 ceph-mon[74194]: pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3574386585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:59:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:24] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:59:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:24] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Sep 30 14:59:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3766808394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:59:25 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2686760645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:59:25 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:25 compute-0 nova_compute[261524]: 2025-09-30 14:59:25.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:25 compute-0 nova_compute[261524]: 2025-09-30 14:59:25.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:25 compute-0 nova_compute[261524]: 2025-09-30 14:59:25.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 14:59:26 compute-0 nova_compute[261524]: 2025-09-30 14:59:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:26.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:26 compute-0 ceph-mon[74194]: pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2068211056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:59:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:27.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:27 compute-0 nova_compute[261524]: 2025-09-30 14:59:27.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:27 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:27 compute-0 nova_compute[261524]: 2025-09-30 14:59:27.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:27 compute-0 nova_compute[261524]: 2025-09-30 14:59:27.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:27 compute-0 nova_compute[261524]: 2025-09-30 14:59:27.953 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:27 compute-0 nova_compute[261524]: 2025-09-30 14:59:27.995 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:59:27 compute-0 nova_compute[261524]: 2025-09-30 14:59:27.996 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:59:27 compute-0 nova_compute[261524]: 2025-09-30 14:59:27.996 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:59:27 compute-0 nova_compute[261524]: 2025-09-30 14:59:27.996 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 14:59:27 compute-0 nova_compute[261524]: 2025-09-30 14:59:27.997 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:59:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:28.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:59:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648502114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:59:28 compute-0 nova_compute[261524]: 2025-09-30 14:59:28.462 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:59:28 compute-0 nova_compute[261524]: 2025-09-30 14:59:28.626 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 14:59:28 compute-0 nova_compute[261524]: 2025-09-30 14:59:28.627 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4483MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 14:59:28 compute-0 nova_compute[261524]: 2025-09-30 14:59:28.627 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:59:28 compute-0 nova_compute[261524]: 2025-09-30 14:59:28.627 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:59:28 compute-0 ceph-mon[74194]: pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1648502114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:59:28 compute-0 nova_compute[261524]: 2025-09-30 14:59:28.734 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 14:59:28 compute-0 nova_compute[261524]: 2025-09-30 14:59:28.734 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 14:59:28 compute-0 nova_compute[261524]: 2025-09-30 14:59:28.795 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 14:59:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:28.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:29 compute-0 sudo[299980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:59:29 compute-0 sudo[299980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:29 compute-0 sudo[299980]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 14:59:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145316144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:59:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:29.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:29 compute-0 nova_compute[261524]: 2025-09-30 14:59:29.307 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 14:59:29 compute-0 nova_compute[261524]: 2025-09-30 14:59:29.313 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 14:59:29 compute-0 nova_compute[261524]: 2025-09-30 14:59:29.334 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 14:59:29 compute-0 nova_compute[261524]: 2025-09-30 14:59:29.336 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 14:59:29 compute-0 nova_compute[261524]: 2025-09-30 14:59:29.336 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:59:29 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:59:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:59:29 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1145316144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 14:59:29 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:59:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:59:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:59:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:59:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:59:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:59:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:59:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:30.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:30 compute-0 ceph-mon[74194]: pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:31 compute-0 nova_compute[261524]: 2025-09-30 14:59:31.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:31.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:31 compute-0 nova_compute[261524]: 2025-09-30 14:59:31.332 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:31 compute-0 nova_compute[261524]: 2025-09-30 14:59:31.333 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:31 compute-0 nova_compute[261524]: 2025-09-30 14:59:31.333 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:31 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:32.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:32 compute-0 nova_compute[261524]: 2025-09-30 14:59:32.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:32 compute-0 ceph-mon[74194]: pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:33.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:33 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:33.758Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:33 compute-0 nova_compute[261524]: 2025-09-30 14:59:33.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:33 compute-0 nova_compute[261524]: 2025-09-30 14:59:33.953 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Sep 30 14:59:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:34.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:34] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:59:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:34] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 14:59:34 compute-0 ceph-mon[74194]: pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:35 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:36 compute-0 nova_compute[261524]: 2025-09-30 14:59:36.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:36 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:36 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000026s ======
Sep 30 14:59:36 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:36.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Sep 30 14:59:36 compute-0 ceph-mon[74194]: pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:37 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:37.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:37 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:37 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:37 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:37.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:37 compute-0 nova_compute[261524]: 2025-09-30 14:59:37.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:37 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:37 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:59:38.280 163966 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 14:59:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:59:38.280 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 14:59:38 compute-0 ovn_metadata_agent[163949]: 2025-09-30 14:59:38.280 163966 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 14:59:38 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:38 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:38 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:38.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:38 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:38.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:38 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:39 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:39 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:39 compute-0 ceph-mon[74194]: pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:39 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:39 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:39 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:39.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:39 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:40 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:40 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:40 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:40.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:40 compute-0 nova_compute[261524]: 2025-09-30 14:59:40.968 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 14:59:40 compute-0 nova_compute[261524]: 2025-09-30 14:59:40.969 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Sep 30 14:59:40 compute-0 nova_compute[261524]: 2025-09-30 14:59:40.986 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Sep 30 14:59:41 compute-0 ceph-mon[74194]: pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:41 compute-0 nova_compute[261524]: 2025-09-30 14:59:41.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:41 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:41 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:41 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:41.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:41 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:42 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:42 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:42 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:42.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:42 compute-0 nova_compute[261524]: 2025-09-30 14:59:42.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:42 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:43 compute-0 ceph-mon[74194]: pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:43 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:43 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:43 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:43.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:43 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:43 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:43.759Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:43 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:44 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:44 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:44 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:44 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:44.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:44 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:59:44 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:59:44 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:44] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:59:44 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:44] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:59:45 compute-0 ceph-mon[74194]: pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:45 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:59:45 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:45 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:45 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:45.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:45 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:46 compute-0 nova_compute[261524]: 2025-09-30 14:59:46.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:46 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:46 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:46 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:46.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:47 compute-0 ceph-mon[74194]: pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:47 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:47.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:47 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:47 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:47 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:47.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:47 compute-0 nova_compute[261524]: 2025-09-30 14:59:47.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:47 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:47 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:48 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:48 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:48 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:48.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:48 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:48.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:59:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:49 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:49 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:49 compute-0 ceph-mon[74194]: pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:49 compute-0 sudo[300027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 14:59:49 compute-0 sudo[300027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 14:59:49 compute-0 sudo[300027]: pam_unix(sudo:session): session closed for user root
Sep 30 14:59:49 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:49 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:49 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:49 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:50 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:50 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:50 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:50.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:51 compute-0 ceph-mon[74194]: pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:51 compute-0 nova_compute[261524]: 2025-09-30 14:59:51.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:51 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:51 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:51 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:51 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:52 compute-0 podman[300062]: 2025-09-30 14:59:52.145468464 +0000 UTC m=+0.056730536 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 14:59:52 compute-0 podman[300056]: 2025-09-30 14:59:52.155013442 +0000 UTC m=+0.085172036 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20250923, tcib_managed=true, config_id=iscsid)
Sep 30 14:59:52 compute-0 podman[300057]: 2025-09-30 14:59:52.158252743 +0000 UTC m=+0.083944205 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Sep 30 14:59:52 compute-0 podman[300058]: 2025-09-30 14:59:52.165115754 +0000 UTC m=+0.087451613 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Sep 30 14:59:52 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:52 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:52 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:52.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:52 compute-0 nova_compute[261524]: 2025-09-30 14:59:52.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:52 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:53 compute-0 ceph-mon[74194]: pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:53 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:53 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:53 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:53.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:53 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:53.760Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 14:59:53 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:53.761Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:53 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:54 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:54 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:54 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:54 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:54.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:54 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:54] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:59:54 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:14:59:54] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 14:59:55 compute-0 ceph-mon[74194]: pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:55 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:55 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:55 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:55.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:55 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:56 compute-0 nova_compute[261524]: 2025-09-30 14:59:56.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:56 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:56 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:56 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:56.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:57 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:57.256Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:57 compute-0 ceph-mon[74194]: pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:57 compute-0 nova_compute[261524]: 2025-09-30 14:59:57.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 14:59:57 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:57 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 14:59:57 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:57.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 14:59:57 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 14:59:57 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:58 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:58 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:58 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:14:59:58.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:58 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T14:59:58.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 14:59:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 14:59:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 14:59:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:58 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 14:59:59 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 14:59:59 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 14:59:59 compute-0 ceph-mon[74194]: pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 14:59:59 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 14:59:59 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 14:59:59 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:14:59:59.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Optimize plan auto_2025-09-30_14:59:59
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [balancer INFO root] do_upmap
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.log', 'vms', 'volumes', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.data', 'backups', '.mgr']
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [balancer INFO root] prepared 0/10 upmap changes
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 14:59:59 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 14:59:59 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 14:59:59 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 15:00:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 2 failed cephadm daemon(s)
Sep 30 15:00:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 15:00:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :      osd.1 observed slow operation indications in BlueStore
Sep 30 15:00:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Sep 30 15:00:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.mybdtc on compute-1 is in error state
Sep 30 15:00:00 compute-0 ceph-mon[74194]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.1.0.compute-2.jhairi on compute-2 is in error state
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] _maybe_adjust
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Sep 30 15:00:00 compute-0 ceph-mgr[74485]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Sep 30 15:00:00 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:00 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:00 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:00.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:00 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 15:00:00 compute-0 ceph-mon[74194]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 2 failed cephadm daemon(s)
Sep 30 15:00:00 compute-0 ceph-mon[74194]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Sep 30 15:00:00 compute-0 ceph-mon[74194]:      osd.1 observed slow operation indications in BlueStore
Sep 30 15:00:00 compute-0 ceph-mon[74194]: [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Sep 30 15:00:00 compute-0 ceph-mon[74194]:     daemon nfs.cephfs.0.0.compute-1.mybdtc on compute-1 is in error state
Sep 30 15:00:00 compute-0 ceph-mon[74194]:     daemon nfs.cephfs.1.0.compute-2.jhairi on compute-2 is in error state
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: vms, start_after=
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: volumes, start_after=
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: backups, start_after=
Sep 30 15:00:01 compute-0 nova_compute[261524]: 2025-09-30 15:00:01.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: [rbd_support INFO root] load_schedules: images, start_after=
Sep 30 15:00:01 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:01 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:01 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:01.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:01 compute-0 ceph-mon[74194]: pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:01 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 15:00:02 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:02 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:02 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:02.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:02 compute-0 nova_compute[261524]: 2025-09-30 15:00:02.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:02 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 15:00:03 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:03 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:03 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:03.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:03 compute-0 ceph-mon[74194]: pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 15:00:03 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:03 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:03.762Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 15:00:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 15:00:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 15:00:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:03 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 15:00:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:04 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 15:00:04 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:04 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:04 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:04.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:04 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:15:00:04] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 15:00:04 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:15:00:04] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Sep 30 15:00:05 compute-0 sshd-session[300149]: Accepted publickey for zuul from 192.168.122.10 port 60986 ssh2: ECDSA SHA256:bXV1aFTGAGwGo0hLh6HZ3pTGxlJrPf0VedxXflT3nU8
Sep 30 15:00:05 compute-0 systemd-logind[808]: New session 60 of user zuul.
Sep 30 15:00:05 compute-0 systemd[1]: Started Session 60 of User zuul.
Sep 30 15:00:05 compute-0 sshd-session[300149]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Sep 30 15:00:05 compute-0 sudo[300153]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Sep 30 15:00:05 compute-0 sudo[300153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Sep 30 15:00:05 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:05 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 15:00:05 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:05.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 15:00:05 compute-0 ceph-mon[74194]: pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:05 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:06 compute-0 nova_compute[261524]: 2025-09-30 15:00:06.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:06 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:06 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 15:00:06 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:06.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 15:00:07 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:07.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 15:00:07 compute-0 nova_compute[261524]: 2025-09-30 15:00:07.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:07 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:07 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 15:00:07 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:07.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 15:00:07 compute-0 ceph-mon[74194]: pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:07 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 15:00:07 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 15:00:07 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17895 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:07 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27482 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:07 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27067 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:08 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27488 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:08 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17901 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:08 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:08 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.005000125s ======
Sep 30 15:00:08 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:08.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000125s
Sep 30 15:00:08 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27073 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:08 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Sep 30 15:00:08 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3198502747' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 15:00:08 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:08.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 15:00:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 15:00:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 15:00:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:08 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 15:00:09 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:09 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 15:00:09 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:09 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:09 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:09.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:09 compute-0 sudo[300405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 15:00:09 compute-0 sudo[300405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:09 compute-0 sudo[300405]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:09 compute-0 ceph-mon[74194]: pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 15:00:09 compute-0 ceph-mon[74194]: from='client.17895 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:09 compute-0 ceph-mon[74194]: from='client.27482 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:09 compute-0 ceph-mon[74194]: from='client.27067 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:09 compute-0 ceph-mon[74194]: from='client.27488 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:09 compute-0 ceph-mon[74194]: from='client.17901 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:09 compute-0 ceph-mon[74194]: from='client.27073 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:09 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2166993216' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 15:00:09 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3198502747' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 15:00:09 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2443219466' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Sep 30 15:00:09 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:10 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:10 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 15:00:10 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:10.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 15:00:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Sep 30 15:00:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3491280709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 15:00:11 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Sep 30 15:00:11 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3491280709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 15:00:11 compute-0 nova_compute[261524]: 2025-09-30 15:00:11.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:11 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:11 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:11 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:11.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:11 compute-0 ceph-mon[74194]: pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3491280709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Sep 30 15:00:11 compute-0 ceph-mon[74194]: from='client.? 192.168.122.10:0/3491280709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Sep 30 15:00:11 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 15:00:11 compute-0 ovs-vsctl[300499]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Sep 30 15:00:12 compute-0 nova_compute[261524]: 2025-09-30 15:00:12.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:12 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:12 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:12 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:12.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:12 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 15:00:12 compute-0 virtqemud[261000]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Sep 30 15:00:12 compute-0 virtqemud[261000]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Sep 30 15:00:12 compute-0 virtqemud[261000]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Sep 30 15:00:13 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:13 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:13 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:13.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:13 compute-0 ceph-mon[74194]: pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 15:00:13 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: cache status {prefix=cache status} (starting...)
Sep 30 15:00:13 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:13 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: client ls {prefix=client ls} (starting...)
Sep 30 15:00:13 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:13 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:13 compute-0 lvm[300829]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 15:00:13 compute-0 lvm[300829]: VG ceph_vg0 finished
Sep 30 15:00:13 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:13.763Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 15:00:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:13 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 15:00:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 15:00:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 15:00:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:14 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 15:00:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27518 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17928 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: damage ls {prefix=damage ls} (starting...)
Sep 30 15:00:14 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:14 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:14 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:14 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:14.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Sep 30 15:00:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27097 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump loads {prefix=dump loads} (starting...)
Sep 30 15:00:14 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Sep 30 15:00:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3642886638' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mon[74194]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1323892427' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3642886638' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Sep 30 15:00:14 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27542 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 15:00:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Sep 30 15:00:14 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:14 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:15:00:14] "GET /metrics HTTP/1.1" 200 48526 "" "Prometheus/2.51.0"
Sep 30 15:00:14 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:15:00:14] "GET /metrics HTTP/1.1" 200 48526 "" "Prometheus/2.51.0"
Sep 30 15:00:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Sep 30 15:00:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17955 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:14 compute-0 sudo[301045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 15:00:14 compute-0 sudo[301045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:14 compute-0 sudo[301045]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:14 compute-0 sudo[301076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Sep 30 15:00:14 compute-0 sudo[301076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:14 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27109 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:14 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Sep 30 15:00:14 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:14 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 15:00:14 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1253281911' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Sep 30 15:00:15 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:15 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27554 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Sep 30 15:00:15 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/756214393' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.17985 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27127 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Sep 30 15:00:15 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Sep 30 15:00:15 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254574065' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 15:00:15 compute-0 sudo[301076]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:15 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:15 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:15 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:15.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:15 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: get subtrees {prefix=get subtrees} (starting...)
Sep 30 15:00:15 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:15 compute-0 ceph-mon[74194]: pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='client.27518 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='client.17928 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='client.27097 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3380349105' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2832590171' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1253281911' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/756214393' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/509019475' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1254574065' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27575 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18006 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:15 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: ops {prefix=ops} (starting...)
Sep 30 15:00:15 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:15 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27139 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:15 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Sep 30 15:00:15 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967005215' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Sep 30 15:00:16 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548666988' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27596 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 nova_compute[261524]: 2025-09-30 15:00:16.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:16 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18036 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:16 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:16 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:16.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:16 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: session ls {prefix=session ls} (starting...)
Sep 30 15:00:16 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob Can't run that command on an inactive MDS!
Sep 30 15:00:16 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27614 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mds[96424]: mds.cephfs.compute-0.gqfeob asok_command: status {prefix=status} (starting...)
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.27542 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.17955 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.27109 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.27554 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.17985 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.27127 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.27575 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2509121203' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/169135652' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/967005215' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1444135951' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1548666988' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3922135924' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2314142505' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Sep 30 15:00:16 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2063014365' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27169 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18057 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:16 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 15:00:16 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/945989588' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Sep 30 15:00:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27181 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Sep 30 15:00:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543327636' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 15:00:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1620048780' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Sep 30 15:00:17 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:17.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 15:00:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Sep 30 15:00:17 compute-0 nova_compute[261524]: 2025-09-30 15:00:17.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:17 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:17 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:17 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:17 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:17.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Sep 30 15:00:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717777796' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 15:00:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Sep 30 15:00:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.18006 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.27139 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.27596 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.18036 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/514729598' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.27614 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2063014365' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.27169 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.18057 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3764007661' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/945989588' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.27181 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1438418718' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3543327636' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1620048780' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4050787884' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3546181131' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2506735949' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Sep 30 15:00:17 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3185932070' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 15:00:17 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Sep 30 15:00:18 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27662 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T15:00:18.086+0000 7ffa0cb94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 15:00:18 compute-0 ceph-mgr[74485]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150106663' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27674 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T15:00:18.184+0000 7ffa0cb94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 15:00:18 compute-0 ceph-mgr[74485]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:18 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:18 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:18 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:18.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:18 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27232 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: 2025-09-30T15:00:18.575+0000 7ffa0cb94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 15:00:18 compute-0 ceph-mgr[74485]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049002099' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045765584' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3717777796' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3661243403' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3185932070' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2620502725' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2234122077' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.27662 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4041994304' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4150106663' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.27674 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/192830493' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1830794122' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/880698153' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3049002099' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1045765584' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:18.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Sep 30 15:00:18 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:18.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 15:00:18 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Sep 30 15:00:18 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 15:00:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 15:00:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:18 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 15:00:19 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:19 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 15:00:19 compute-0 sudo[301631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 15:00:19 compute-0 sudo[301631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:19 compute-0 sudo[301631]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Sep 30 15:00:19 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2124409050' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 15:00:19 compute-0 sudo[301674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Sep 30 15:00:19 compute-0 sudo[301674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Sep 30 15:00:19 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4041307142' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27713 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:19 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:19 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:19 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:19.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:19 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18171 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:19 compute-0 podman[301869]: 2025-09-30 15:00:19.524657225 +0000 UTC m=+0.048777038 container create 5a2b6be80b477a481a607e96285823695e6c066ad54b414b65d7de895ab6d83e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Sep 30 15:00:19 compute-0 systemd[1]: Started libpod-conmon-5a2b6be80b477a481a607e96285823695e6c066ad54b414b65d7de895ab6d83e.scope.
Sep 30 15:00:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 15:00:19 compute-0 podman[301869]: 2025-09-30 15:00:19.502107602 +0000 UTC m=+0.026227435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 15:00:19 compute-0 podman[301869]: 2025-09-30 15:00:19.60303258 +0000 UTC m=+0.127152423 container init 5a2b6be80b477a481a607e96285823695e6c066ad54b414b65d7de895ab6d83e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poitras, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Sep 30 15:00:19 compute-0 podman[301869]: 2025-09-30 15:00:19.610772593 +0000 UTC m=+0.134892406 container start 5a2b6be80b477a481a607e96285823695e6c066ad54b414b65d7de895ab6d83e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 15:00:19 compute-0 podman[301869]: 2025-09-30 15:00:19.615252705 +0000 UTC m=+0.139372538 container attach 5a2b6be80b477a481a607e96285823695e6c066ad54b414b65d7de895ab6d83e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poitras, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 15:00:19 compute-0 cool_poitras[301900]: 167 167
Sep 30 15:00:19 compute-0 systemd[1]: libpod-5a2b6be80b477a481a607e96285823695e6c066ad54b414b65d7de895ab6d83e.scope: Deactivated successfully.
Sep 30 15:00:19 compute-0 podman[301869]: 2025-09-30 15:00:19.616500906 +0000 UTC m=+0.140620719 container died 5a2b6be80b477a481a607e96285823695e6c066ad54b414b65d7de895ab6d83e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poitras, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 15:00:19 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18183 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-11e5e7e9fad442b35df4f395d6202dde4961500c5887c6bc0dd4d906d86e943f-merged.mount: Deactivated successfully.
Sep 30 15:00:19 compute-0 podman[301869]: 2025-09-30 15:00:19.663903349 +0000 UTC m=+0.188023162 container remove 5a2b6be80b477a481a607e96285823695e6c066ad54b414b65d7de895ab6d83e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poitras, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 15:00:19 compute-0 systemd[1]: libpod-conmon-5a2b6be80b477a481a607e96285823695e6c066ad54b414b65d7de895ab6d83e.scope: Deactivated successfully.
Sep 30 15:00:19 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Sep 30 15:00:19 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612524074' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.27232 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2656002383' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3355341885' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2124409050' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1754087944' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3652980622' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4041307142' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.27713 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3444784093' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.18171 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/423527519' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2412017039' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Sep 30 15:00:19 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27271 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:19 compute-0 podman[301985]: 2025-09-30 15:00:19.831548141 +0000 UTC m=+0.052989083 container create d0a265ea1edbad147099f3d412e4256ca7fc54577f692d0165ba7ed9d1d4b503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Sep 30 15:00:19 compute-0 systemd[1]: Started libpod-conmon-d0a265ea1edbad147099f3d412e4256ca7fc54577f692d0165ba7ed9d1d4b503.scope.
Sep 30 15:00:19 compute-0 podman[301985]: 2025-09-30 15:00:19.814388413 +0000 UTC m=+0.035829355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 15:00:19 compute-0 systemd[1]: Started libcrun container.
Sep 30 15:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4c21e0d6a05be47c91da9174a4bab13650b241950fa3447be2ae802faf2c20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4c21e0d6a05be47c91da9174a4bab13650b241950fa3447be2ae802faf2c20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4c21e0d6a05be47c91da9174a4bab13650b241950fa3447be2ae802faf2c20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4c21e0d6a05be47c91da9174a4bab13650b241950fa3447be2ae802faf2c20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e4c21e0d6a05be47c91da9174a4bab13650b241950fa3447be2ae802faf2c20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:19 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18189 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:19 compute-0 podman[301985]: 2025-09-30 15:00:19.933388242 +0000 UTC m=+0.154829204 container init d0a265ea1edbad147099f3d412e4256ca7fc54577f692d0165ba7ed9d1d4b503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Sep 30 15:00:19 compute-0 podman[301985]: 2025-09-30 15:00:19.939802722 +0000 UTC m=+0.161243664 container start d0a265ea1edbad147099f3d412e4256ca7fc54577f692d0165ba7ed9d1d4b503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 15:00:19 compute-0 podman[301985]: 2025-09-30 15:00:19.942868538 +0000 UTC m=+0.164309510 container attach d0a265ea1edbad147099f3d412e4256ca7fc54577f692d0165ba7ed9d1d4b503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 15:00:20 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27743 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27289 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Sep 30 15:00:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2277989491' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 15:00:20 compute-0 angry_black[302032]: --> passed data devices: 0 physical, 1 LVM
Sep 30 15:00:20 compute-0 angry_black[302032]: --> All data devices are unavailable
Sep 30 15:00:20 compute-0 systemd[1]: libpod-d0a265ea1edbad147099f3d412e4256ca7fc54577f692d0165ba7ed9d1d4b503.scope: Deactivated successfully.
Sep 30 15:00:20 compute-0 podman[301985]: 2025-09-30 15:00:20.281535807 +0000 UTC m=+0.502976749 container died d0a265ea1edbad147099f3d412e4256ca7fc54577f692d0165ba7ed9d1d4b503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Sep 30 15:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e4c21e0d6a05be47c91da9174a4bab13650b241950fa3447be2ae802faf2c20-merged.mount: Deactivated successfully.
Sep 30 15:00:20 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18204 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 podman[301985]: 2025-09-30 15:00:20.332728035 +0000 UTC m=+0.554168977 container remove d0a265ea1edbad147099f3d412e4256ca7fc54577f692d0165ba7ed9d1d4b503 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Sep 30 15:00:20 compute-0 systemd[1]: libpod-conmon-d0a265ea1edbad147099f3d412e4256ca7fc54577f692d0165ba7ed9d1d4b503.scope: Deactivated successfully.
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:23.357416+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:24.357590+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:25.357812+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:26.357977+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:27.358138+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:28.358242+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:29.358385+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:30.358521+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:31.358638+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:32.358803+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:33.358946+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:34.359085+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:35.359238+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:36.359403+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:37.359592+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:38.359739+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:39.360139+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:40.360367+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:41.360534+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:42.360690+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:43.360850+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:44.361021+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:45.361216+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:46.361359+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:47.361566+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:48.361720+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:49.361872+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:50.362014+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:51.362140+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:52.362231+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:53.362366+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:54.362479+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:55.362629+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:56.362828+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:57.363028+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:58.363573+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:27:59.363875+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:00.364003+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 3891200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:01.364125+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:02.364225+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:03.364332+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:04.364471+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:05.364640+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:06.364877+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:07.365112+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:08.365281+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:09.365498+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:10.365640+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:11.365811+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:12.366021+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:13.366209+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:14.366404+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:15.366568+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:16.366723+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:17.366934+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:18.367064+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:19.367618+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:20.367760+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37929c00 session 0x559a386f30e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36459800 session 0x559a352fa1e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:21.367889+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:22.368024+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:23.368148+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:24.370600+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:25.370734+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:26.370861+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942798 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:27.371042+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:28.371227+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:29.371424+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:30.371565+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3793b400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 125.835968018s of 125.864295959s, submitted: 2
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:31.371678+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942930 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:32.371847+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:33.371990+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:34.372103+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:35.372224+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:36.372401+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942930 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:37.372546+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:38.372721+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:39.372851+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:40.372996+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:41.373109+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942339 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:42.373303+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:43.373426+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:44.373681+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:45.373888+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:46.374034+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942339 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:47.374219+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 3883008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.006479263s of 17.014936447s, submitted: 2
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:48.374379+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:49.374494+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:50.374780+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:51.374958+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:52.375091+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a34810800 session 0x559a37980960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37928000 session 0x559a386e5860
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:53.375503+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:54.375760+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:55.375917+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:56.376118+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:57.376491+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:58.376653+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:28:59.376818+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:00.376974+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:01.377090+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:02.377293+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 3874816 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37928000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.830188751s of 14.834419250s, submitted: 1
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:03.377454+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:04.378029+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:05.378145+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:06.378302+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942339 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:07.378482+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:08.378598+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:09.378719+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:10.378911+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:11.379087+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942339 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:12.379240+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:13.379364+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:14.380066+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:15.381277+0000)
Sep 30 15:00:20 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:20 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:20 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:20.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 3866624 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.002157211s of 13.013655663s, submitted: 1
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:16.381695+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:17.381893+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:18.382060+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:19.382204+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:20.382357+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:21.382484+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:22.382627+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:23.382772+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:24.382905+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:25.383065+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:26.383229+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:27.383373+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:28.383508+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:29.383625+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:30.383756+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:31.383891+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:32.384004+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:33.384139+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:34.384274+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:35.384402+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:36.384537+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:37.384664+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:38.384783+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:39.384911+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:40.385065+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:41.385198+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 3858432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:42.385373+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:43.385509+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:44.385704+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:45.385852+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455000 session 0x559a380d6d20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a380d65a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:46.386039+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a3793b400 session 0x559a382a1a40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:47.386259+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:48.386438+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:49.386604+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:50.386745+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:51.386863+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942207 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:52.386997+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 sudo[301674]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:53.387148+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:54.387269+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:55.387355+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:56.387489+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34811000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.213050842s of 40.217830658s, submitted: 1
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942339 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 86368256 unmapped: 2801664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:57.387668+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:58.387808+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 3850240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:29:59.387952+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36459800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:00.388102+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:01.388234+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943983 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:02.388386+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:03.388557+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:04.388753+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:05.388939+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:06.389088+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945495 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:07.389257+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:08.389396+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:09.389510+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:10.389632+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.983433723s of 13.998912811s, submitted: 4
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:11.389839+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945231 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:12.389997+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:13.390123+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:14.390236+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:15.390381+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:16.390524+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945231 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:17.390768+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:18.390896+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:19.391070+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:20.391245+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:21.391415+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945231 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:22.391559+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:23.391696+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36459800 session 0x559a3852a3c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37928000 session 0x559a381fbc20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:24.391804+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:25.391938+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:26.392100+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945231 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:27.392267+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:28.392488+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:29.392677+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:30.392810+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:31.392941+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945231 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:32.393114+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:33.393225+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:34.393343+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a346e3c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.723939896s of 24.757307053s, submitted: 2
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:35.393470+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:36.393611+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945363 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:37.393811+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:38.393983+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:39.394102+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:40.394221+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 3842048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36455000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:41.394379+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946875 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:42.394522+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455400 session 0x559a35e89e00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36459800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:43.394651+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:44.394761+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36458400 session 0x559a35a3d680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3793b400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:45.394920+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:46.395073+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.069216728s of 12.078630447s, submitted: 3
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945693 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:47.395240+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:48.395386+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:49.395512+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:50.395686+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:51.395854+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945693 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:52.396007+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:53.396241+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:54.396388+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:55.396520+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 3833856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:56.396647+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:57.396800+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:58.397012+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:30:59.397204+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:00.397383+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:01.397527+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:02.397700+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:03.397853+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:04.398014+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:05.398151+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:06.398302+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:07.398442+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:08.398592+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:09.398734+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:10.399712+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:11.399838+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:12.399987+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:13.400147+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:14.400257+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:15.400371+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:16.400488+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455000 session 0x559a3852b0e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a3852ab40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:17.400686+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:18.400852+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 3825664 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:19.400968+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:20.401091+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:21.401225+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:22.401375+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945561 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:23.401566+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:24.401922+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:25.402386+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:26.402514+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36458400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.949089050s of 39.967605591s, submitted: 2
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:27.402671+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945693 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:28.402849+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:29.403101+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:30.403379+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:31.403725+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:32.403854+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945693 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c6b800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:33.404042+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:34.404223+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:35.404376+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:36.404545+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:37.404804+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945693 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:38.404949+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.115405083s of 12.119539261s, submitted: 1
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:39.405125+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:40.405429+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:41.405761+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:42.405942+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945102 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:43.406261+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:44.406540+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 3817472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:45.406819+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:46.407203+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:47.407399+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:48.407585+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:49.407697+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:50.407825+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:51.407984+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:52.408157+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:53.408884+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:54.409044+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:55.409181+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:56.409376+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:57.409568+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:58.409682+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:31:59.409842+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:00.409973+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:01.410244+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:02.410407+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:03.410536+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:04.410653+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:05.410774+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:06.410926+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 3809280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:07.411649+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:08.411788+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:09.411918+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:10.412045+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:11.412237+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:12.412427+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:13.412572+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:14.412718+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:15.412878+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37929c00 session 0x559a37f42780
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35203400 session 0x559a352fa960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:16.413005+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:17.413252+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:18.413377+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:19.413531+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:20.413679+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:21.413810+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:22.413939+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944970 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:23.414062+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:24.414194+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:25.414322+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a379d3400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.930648804s of 46.940258026s, submitted: 2
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:26.414487+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:27.415053+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945102 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:28.415235+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 3801088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eacc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:29.415565+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:30.415688+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:31.415896+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:32.416041+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946614 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 3776512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:33.416215+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:34.416342+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:35.416522+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:36.416690+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:37.416888+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946614 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:38.417044+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:39.417626+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:40.418004+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:41.418271+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:42.418401+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946614 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.521619797s of 16.530069351s, submitted: 2
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:43.418760+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:44.418950+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:45.419604+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:46.419764+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a379d3400 session 0x559a379814a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 3768320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:47.420052+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946482 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:48.420413+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:49.420653+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:50.420783+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:51.423490+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:52.423607+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946482 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:53.423713+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:54.423857+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:55.423990+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:56.424216+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a346e3c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.257704735s of 14.261231422s, submitted: 1
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:57.424371+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946614 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:58.424523+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:32:59.424677+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 3760128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:00.424818+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:01.424954+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:02.425156+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948126 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:03.425309+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 3751936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:04.425419+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:05.425529+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:06.425646+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:07.425826+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947535 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:08.425950+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:09.426069+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:10.426202+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 3743744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.876503944s of 14.200399399s, submitted: 3
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:11.426343+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:12.426440+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947403 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:13.426569+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:14.426721+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:15.426857+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:16.426954+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:17.427103+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947403 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:18.427419+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a35b7b860
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:19.427601+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:20.427748+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 3735552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:21.427880+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 3727360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:22.428010+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947403 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 3727360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9389 writes, 35K keys, 9389 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9389 writes, 2394 syncs, 3.92 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 812 writes, 1249 keys, 812 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                           Interval WAL: 812 writes, 406 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376e9b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x559a3376f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:23.428136+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:24.428236+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:25.428355+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:26.428485+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:27.428653+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947403 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:28.428788+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:29.428938+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.340478897s of 18.891267776s, submitted: 1
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:30.429070+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:31.429224+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:32.429360+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947535 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:33.429508+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 3694592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:34.429672+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:35.429811+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:36.429938+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36455000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:37.430267+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950559 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:38.430411+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:39.430538+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.068456650s of 10.116128922s, submitted: 3
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:40.430734+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:41.430855+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:42.430991+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949968 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:43.431284+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:44.431468+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 3686400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:45.431628+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:46.431774+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:47.431942+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949836 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:48.432066+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:49.432259+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:50.432396+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:51.432617+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:52.432797+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949836 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:53.432903+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:54.433034+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:55.433196+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35eacc00 session 0x559a3750a960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a34811000 session 0x559a3852a1e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:56.433336+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:57.433481+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949836 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:58.433625+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:33:59.433745+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:00.433856+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:01.434021+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:02.434157+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949836 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:03.434304+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:04.434505+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:05.434691+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.097017288s of 26.103408813s, submitted: 2
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:06.434816+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:07.435000+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949968 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:08.435249+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:09.435435+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:10.435665+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:11.435820+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:12.435983+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949968 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:13.436208+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:14.436383+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:15.436559+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:16.436733+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 3678208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:17.436954+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948786 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:18.437124+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:19.437327+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:20.437486+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.030439377s of 15.042234421s, submitted: 3
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:21.437648+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:22.437834+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37c6b800 session 0x559a3852be00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36458400 session 0x559a3852bc20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948654 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:23.438022+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:24.438183+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:25.438375+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:26.438527+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:27.438856+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948654 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:28.439036+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:29.439274+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:30.439482+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:31.439683+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:32.439870+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948654 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36458400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.988931656s of 11.991481781s, submitted: 1
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:33.440010+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:34.440231+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 3670016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:35.440425+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37929c00 session 0x559a374d52c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 3653632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:36.441221+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 3653632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:37.441414+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948786 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:38.442349+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 3653632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:39.443256+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 3653632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:40.443863+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:41.444094+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:42.444749+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948786 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:43.445367+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:44.445895+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:45.446096+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a346e3c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.982230186s of 12.986698151s, submitted: 1
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:46.446245+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:47.446441+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948786 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:48.446845+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:49.447069+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:50.447302+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:51.447557+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:52.447753+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34811000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950298 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:53.447982+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:54.448274+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:55.448457+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 3645440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:56.448652+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:57.449000+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950298 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:58.449270+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.175735474s of 12.185593605s, submitted: 3
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:34:59.449430+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:00.449631+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:01.449781+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:02.449904+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:03.450011+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:04.450204+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 3629056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:05.450340+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 3579904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:06.450498+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:07.450729+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:08.450944+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:09.451397+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:10.451626+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:11.451872+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:12.452229+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:13.452988+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:14.453273+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:15.453839+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:16.454019+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:17.454390+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:18.454588+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:19.454853+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:20.455034+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:21.455249+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:22.455384+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:23.455569+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:24.455727+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:25.456048+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:26.456283+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:27.456502+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:28.456706+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:29.456941+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:30.457265+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:31.457502+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:32.457633+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:33.457861+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:34.458107+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:35.458346+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:36.458555+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:37.458808+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:38.459030+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:39.459240+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:40.459482+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:41.459663+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:42.459837+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.053421021s of 44.496948242s, submitted: 120
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:43.460024+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:44.460215+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3334144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:45.460351+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:46.460489+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:47.460684+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:48.460871+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:49.461057+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:50.461265+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:51.461464+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:52.461649+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:53.461809+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:54.462089+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:55.462258+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:56.462393+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 3252224 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:57.462649+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:58.462834+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:35:59.463006+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:00.463219+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:01.463375+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:02.463714+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:03.463869+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:04.464034+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:05.464248+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36455000 session 0x559a380d63c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35203400 session 0x559a34fb4b40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:06.464446+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:07.464632+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a34811000 session 0x559a35a3d680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a346e3c00 session 0x559a3852b4a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:08.464856+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 sudo[302174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:09.465000+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:10.465115+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:11.465314+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:12.465434+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:13.465807+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950166 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:14.466030+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:15.466654+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.520961761s of 33.186458588s, submitted: 236
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:16.466790+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:17.468042+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36455000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:18.468215+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950430 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:19.468480+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:20.469148+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:21.469566+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:22.470107+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 3244032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:23.470271+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951942 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:24.470470+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:25.470632+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:26.471073+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:27.471450+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.164124489s of 12.175537109s, submitted: 3
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:28.471630+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951351 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a36458400 session 0x559a3852a3c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 sudo[302174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:29.471803+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:30.472124+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:31.472409+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:32.472571+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:33.472805+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951219 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:34.473099+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:35.473418+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:36.473676+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:37.473958+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:38.474308+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951087 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:39.474483+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eacc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.271492004s of 11.303041458s, submitted: 3
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:40.474686+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:41.474863+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:42.475120+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c6b800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:43.475393+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954243 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:44.475606+0000)
Sep 30 15:00:20 compute-0 sudo[302174]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:45.475828+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:46.476001+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:47.476266+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:48.476464+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954243 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:49.476629+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:50.476794+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:51.476978+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:52.477225+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.656469345s of 13.668901443s, submitted: 4
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:53.477406+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:54.477630+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:55.477815+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:56.477985+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:57.478251+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:58.478396+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:36:59.478620+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:00.478783+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:01.478931+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 3235840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:02.479129+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:03.479317+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:04.479502+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:05.479653+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:06.479869+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:07.480258+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:08.480454+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:09.480744+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:10.480925+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:11.481092+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:12.481289+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:13.481428+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:14.481601+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:15.481761+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:16.481932+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:17.482198+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:18.482337+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:19.482570+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:20.483380+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:21.484140+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:22.484663+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:23.485291+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:24.485746+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:25.486256+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:26.486580+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:27.486812+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:28.487055+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:29.487223+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:30.487431+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:31.487849+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:32.488084+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37929c00 session 0x559a37d7ed20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35203400 session 0x559a37d7e5a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35eacc00 session 0x559a37d7f2c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:33.488317+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:34.488492+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:35.488712+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:36.488923+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:37.489276+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:38.489448+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953520 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:39.489613+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:40.489770+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:41.489933+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:42.490087+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c69400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.957733154s of 49.961242676s, submitted: 1
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:43.490256+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 3227648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953784 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:44.490419+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:45.490590+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:46.490733+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:47.490905+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:48.491871+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953784 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c0a400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:49.492032+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37944800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:50.492232+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:51.493354+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:52.494404+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:53.494980+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954705 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:54.495785+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:55.495971+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.094205856s of 12.115049362s, submitted: 4
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:56.496595+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:57.497146+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:58.497736+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953982 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:37:59.498252+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:00.498758+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:01.499088+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:02.499556+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:03.499979+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:04.500403+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:05.500768+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:06.501143+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:07.501643+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:08.501948+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:09.502156+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:10.502521+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:11.502822+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:12.503106+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:13.503457+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:14.503697+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:15.503958+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:16.504214+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:17.504473+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:18.504616+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37944800 session 0x559a385c5e00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a35e38000 session 0x559a37d7fe00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:19.504773+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:20.504966+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:21.505088+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37c0a400 session 0x559a37d7f0e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 ms_handle_reset con 0x559a37c69400 session 0x559a37d7f860
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:22.505231+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:23.505358+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:24.505509+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:25.505656+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:26.505843+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:27.506019+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:28.506247+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 2162688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953850 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:29.506356+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.027877808s of 34.041492462s, submitted: 3
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:30.506485+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:31.506616+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:32.506807+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:33.506944+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954114 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:34.507115+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:35.507269+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eacc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:36.507476+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:37.507694+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:38.507884+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955626 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:39.508022+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:40.508158+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:41.508351+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.082656860s of 12.092800140s, submitted: 3
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:42.508525+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:43.508714+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954903 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc256000/0x0/0x4ffc00000, data 0xf6f5c/0x1a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:44.508875+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:45.509043+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:46.509306+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 3211264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36458400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:47.509508+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 3203072 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _renew_subs
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:48.509650+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 142 ms_handle_reset con 0x559a35eacc00 session 0x559a35b7ad20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 142 ms_handle_reset con 0x559a35e38000 session 0x559a3852ab40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 2129920 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962227 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:49.509819+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc24e000/0x0/0x4ffc00000, data 0xfb188/0x1ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 143 ms_handle_reset con 0x559a36458400 session 0x559a34fb7c20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 2121728 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _renew_subs
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:50.509999+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 145 ms_handle_reset con 0x559a35e38000 session 0x559a386f2000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:51.510234+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:52.510403+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:53.510605+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022669 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:54.510798+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x8ff3a8/0x9b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:55.510965+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:56.511111+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:57.511340+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _renew_subs
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.849364281s of 15.955332756s, submitted: 38
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:58.511485+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024663 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eacc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:38:59.511643+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:00.511818+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:01.511983+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:02.512152+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:03.512351+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024795 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:04.512519+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 18841600 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:05.512714+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c0a400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:06.512905+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:07.513244+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:08.513478+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025467 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:09.513633+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:10.513825+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:11.513986+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:12.514151+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:13.514387+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025467 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:14.514527+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.920860291s of 16.938102722s, submitted: 15
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:15.514689+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:16.514937+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:17.515309+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:18.515592+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025335 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:19.515745+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:20.515910+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:21.516067+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:22.516212+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:23.516420+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:24.516636+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025335 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:25.516864+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:26.517054+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:27.517277+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:28.517448+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 ms_handle_reset con 0x559a35203400 session 0x559a3852b4a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:29.517656+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025335 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:30.517932+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:31.518054+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:32.518259+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:33.518447+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:34.518614+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025335 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:35.518761+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:36.518960+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:37.519133+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fba46000/0x0/0x4ffc00000, data 0x90137a/0x9b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:38.519303+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:39.519535+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025335 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c69400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.400842667s of 25.404727936s, submitted: 1
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:40.519754+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:41.519927+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fba42000/0x0/0x4ffc00000, data 0x903466/0x9b9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 18833408 heap: 105955328 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:42.520098+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 26402816 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 148 ms_handle_reset con 0x559a37929c00 session 0x559a38090b40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:43.520295+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 26402816 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 148 ms_handle_reset con 0x559a3644d400 session 0x559a37c5f0e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:44.520507+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156281 data_alloc: 218103808 data_used: 139264
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 26402816 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:45.520662+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 26402816 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f7fc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 148 ms_handle_reset con 0x559a37f7fc00 session 0x559a378d3680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:46.520804+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa8d5000/0x0/0x4ffc00000, data 0x1a6f5a6/0x1b26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 26386432 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 148 ms_handle_reset con 0x559a35203400 session 0x559a381fa000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:47.521023+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 148 ms_handle_reset con 0x559a35e38000 session 0x559a385c1680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 26206208 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:48.521286+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa8b1000/0x0/0x4ffc00000, data 0x1a935b6/0x1b4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 26206208 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:49.521431+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159771 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa8b1000/0x0/0x4ffc00000, data 0x1a935b6/0x1b4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 26206208 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:50.521556+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 101638144 unmapped: 12197888 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:51.521712+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.004156113s of 12.149291992s, submitted: 30
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:52.521832+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:53.521987+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:54.522152+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285766 data_alloc: 234881024 data_used: 18407424
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa8ad000/0x0/0x4ffc00000, data 0x1a95588/0x1b4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:55.522331+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:56.522488+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:57.522690+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:58.522817+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:39:59.522926+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285766 data_alloc: 234881024 data_used: 18407424
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104964096 unmapped: 8871936 heap: 113836032 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:00.523126+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa8ad000/0x0/0x4ffc00000, data 0x1a95588/0x1b4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x37ff9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 2957312 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:01.523328+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.947079659s of 10.143234253s, submitted: 91
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3121152 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8cc5000/0x0/0x4ffc00000, data 0x24d8588/0x2591000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:02.523553+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c69400 session 0x559a385d0b40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 3842048 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:03.523680+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 3842048 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:04.523833+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388708 data_alloc: 234881024 data_used: 19419136
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 3842048 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:05.524044+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 3842048 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:06.524305+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 3842048 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:07.524632+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8c8e000/0x0/0x4ffc00000, data 0x250d588/0x25c6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112746496 unmapped: 4235264 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:08.524863+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112746496 unmapped: 4235264 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:09.525056+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384604 data_alloc: 234881024 data_used: 19419136
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112746496 unmapped: 4235264 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:10.525289+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112746496 unmapped: 4235264 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:11.525504+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c6b800 session 0x559a386e4b40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a36455000 session 0x559a37fecb40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:12.525709+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8c93000/0x0/0x4ffc00000, data 0x2510588/0x25c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:13.525834+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:14.526045+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385516 data_alloc: 234881024 data_used: 19488768
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:15.526242+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.836216927s of 13.892159462s, submitted: 28
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:16.526404+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:17.526628+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 4218880 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:18.526806+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8c92000/0x0/0x4ffc00000, data 0x2511588/0x25ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 4210688 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:19.526962+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385740 data_alloc: 234881024 data_used: 19488768
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 4210688 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:20.527121+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 4210688 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:21.527272+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 4210688 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:22.527461+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8c92000/0x0/0x4ffc00000, data 0x2511588/0x25ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 4210688 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:23.527636+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35e38000 session 0x559a352d65a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 4202496 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c69400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c69400 session 0x559a380914a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c6b800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c6b800 session 0x559a381fb680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:24.527794+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385536 data_alloc: 234881024 data_used: 19488768
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 4202496 heap: 116981760 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:25.527965+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a380d6f00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.024140358s of 10.032184601s, submitted: 2
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35202000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35202000 session 0x559a34fb6960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112336896 unmapped: 8855552 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35202000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35202000 session 0x559a380d7a40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35e38000 session 0x559a37f423c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:26.528124+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a386cef00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c69400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c69400 session 0x559a384e5680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 8839168 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:27.528341+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 8839168 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:28.528464+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 8839168 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:29.528594+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423021 data_alloc: 234881024 data_used: 19492864
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112377856 unmapped: 8814592 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:30.528782+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 8806400 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:31.528939+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 8806400 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:32.529090+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 7757824 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:33.529283+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c6b800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 7757824 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:34.529421+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423345 data_alloc: 234881024 data_used: 19529728
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 5177344 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:35.529518+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 5177344 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:36.529665+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 5177344 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:37.529834+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.171621323s of 12.282431602s, submitted: 33
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 5177344 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:38.530011+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116015104 unmapped: 5177344 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:39.530234+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1444341 data_alloc: 234881024 data_used: 22675456
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 5144576 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:40.530444+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 5144576 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:41.530730+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 5144576 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:42.530876+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 5144576 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:43.531020+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 5128192 heap: 121192448 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:44.531150+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447117 data_alloc: 234881024 data_used: 22716416
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f885e000/0x0/0x4ffc00000, data 0x29435fa/0x29fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 5595136 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:45.531356+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 4235264 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:46.531488+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119308288 unmapped: 3989504 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:47.531690+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f84c7000/0x0/0x4ffc00000, data 0x2cd15fa/0x2d8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:48.531915+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f84cf000/0x0/0x4ffc00000, data 0x2cd15fa/0x2d8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:49.532065+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479425 data_alloc: 234881024 data_used: 22691840
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:50.532251+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:51.532419+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.723421097s of 13.870462418s, submitted: 62
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f84cf000/0x0/0x4ffc00000, data 0x2cd15fa/0x2d8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:52.532667+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:53.532819+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 4292608 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:54.533014+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f84cf000/0x0/0x4ffc00000, data 0x2cd15fa/0x2d8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c0a400 session 0x559a35a8f860
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eacc00 session 0x559a374d52c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479593 data_alloc: 234881024 data_used: 22691840
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 4284416 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:55.533292+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c6b800 session 0x559a385d14a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f84d0000/0x0/0x4ffc00000, data 0x2cd15fa/0x2d8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35202000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35202000 session 0x559a35b56780
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 5898240 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:56.533441+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 5898240 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:57.533635+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 5898240 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:58.533786+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 5898240 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:40:59.533975+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395603 data_alloc: 234881024 data_used: 19476480
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 5898240 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:00.534129+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f8c8b000/0x0/0x4ffc00000, data 0x2512588/0x25cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37929c00 session 0x559a385d12c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644d400 session 0x559a35074b40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:01.534496+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 5914624 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35e38000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35e38000 session 0x559a382a05a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:02.534740+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:03.534892+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:04.535038+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060006 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:05.535251+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:06.535425+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:07.535662+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:08.535870+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:09.536040+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060006 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:10.536273+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:11.536432+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:12.536596+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:13.536800+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:14.536969+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060006 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:15.537112+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:16.537293+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:17.538319+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:18.538506+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.074157715s of 27.186643600s, submitted: 39
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:19.538676+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059874 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:20.538856+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:21.538991+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:22.539118+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:23.539306+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:24.539470+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059874 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:25.539641+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:26.539809+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:27.540022+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19636224 heap: 123297792 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35202000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35202000 session 0x559a34fb5e00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:28.540297+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:29.540472+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090172 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:30.540647+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:31.540961+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:32.541222+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644d400 session 0x559a385fc000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:33.541409+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37929c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37929c00 session 0x559a35e89e00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c6b800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c6b800 session 0x559a3852af00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a382e2d20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:34.541896+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090172 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:35.542056+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 22929408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:36.542238+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:37.542452+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:38.542556+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:39.542684+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114340 data_alloc: 218103808 data_used: 3620864
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:40.542879+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:41.543018+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:42.543192+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:43.543369+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:44.543527+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114340 data_alloc: 218103808 data_used: 3620864
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:45.543663+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 22011904 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa536000/0x0/0x4ffc00000, data 0xc6e578/0xd26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.532819748s of 26.560840607s, submitted: 9
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:46.543849+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 17653760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x1264578/0x131c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:47.544062+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107945984 unmapped: 17522688 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:48.544228+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:49.544357+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162178 data_alloc: 218103808 data_used: 3960832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:50.544501+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:51.544639+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:52.544760+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:53.544906+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:54.545078+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162178 data_alloc: 218103808 data_used: 3960832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:55.545251+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:56.545437+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:57.545660+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:58.545826+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:41:59.545991+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162178 data_alloc: 218103808 data_used: 3960832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:00.546258+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:01.546405+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:02.546562+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:03.546757+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:04.546889+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162330 data_alloc: 218103808 data_used: 3964928
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:05.547050+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:06.547243+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x126a578/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:07.547406+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a3750b860
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 17514496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3792a000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.976572037s of 22.099090576s, submitted: 62
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3792a000 session 0x559a38098b40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:08.547610+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:09.547781+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:10.547943+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:11.548092+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:12.548273+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:13.548425+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:14.548570+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:15.548769+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:16.548987+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:17.549249+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:18.549392+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:19.549574+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:20.549710+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:21.549846+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:22.549966+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:23.550104+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:24.550256+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:25.550446+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:26.550593+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:27.550758+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:28.550887+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:29.550998+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:30.551135+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:31.551286+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:32.551467+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:33.551633+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:34.551799+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:35.551964+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:36.552110+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:37.552330+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:38.552475+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:39.552674+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061970 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:40.552891+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:41.554060+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 22282240 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a35e892c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a35b56f00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f8ac00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f8ac00 session 0x559a37d7e960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:42.554269+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a385c4d20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.519138336s of 34.540813446s, submitted: 6
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a3811fa40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 24444928 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3792a000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3792a000 session 0x559a37fec780
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:43.554419+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104185856 unmapped: 24436736 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:44.555259+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104185856 unmapped: 24436736 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa47f000/0x0/0x4ffc00000, data 0xd245da/0xddd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098689 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:45.555546+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104185856 unmapped: 24436736 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a35ecef00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:46.555780+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35afa400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35afa400 session 0x559a37fedc20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 24715264 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a37feda40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37fec1e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:47.556038+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104210432 unmapped: 24412160 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3792a000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:48.556665+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104210432 unmapped: 24412160 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:49.557201+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130042 data_alloc: 218103808 data_used: 4173824
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:50.557488+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0xd485fd/0xe02000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:51.557947+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:52.558261+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:53.558432+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:54.558584+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa45a000/0x0/0x4ffc00000, data 0xd485fd/0xe02000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130042 data_alloc: 218103808 data_used: 4173824
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:55.558769+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:56.558946+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 24264704 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:57.559128+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 24256512 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:58.559226+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.221706390s of 16.312311172s, submitted: 35
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104366080 unmapped: 24256512 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:42:59.559336+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108920832 unmapped: 19701760 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191816 data_alloc: 218103808 data_used: 5292032
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:00.559566+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9d00000/0x0/0x4ffc00000, data 0x14a25fd/0x155c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 20094976 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:01.559771+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 20094976 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:02.560069+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c7d000/0x0/0x4ffc00000, data 0x15255fd/0x15df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 20094976 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c7d000/0x0/0x4ffc00000, data 0x15255fd/0x15df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:03.560376+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 20094976 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:04.560597+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 20094976 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200426 data_alloc: 218103808 data_used: 5365760
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:05.560789+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:06.560961+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:07.561230+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:08.561431+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c5c000/0x0/0x4ffc00000, data 0x15465fd/0x1600000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:09.561646+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198370 data_alloc: 218103808 data_used: 5369856
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:10.561896+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 20070400 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:11.562069+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.448533058s of 12.715208054s, submitted: 96
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c5c000/0x0/0x4ffc00000, data 0x15465fd/0x1600000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:12.562241+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c52000/0x0/0x4ffc00000, data 0x15505fd/0x160a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:13.562381+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:14.562646+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198650 data_alloc: 218103808 data_used: 5369856
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:15.562817+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:16.563009+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 19906560 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f000 session 0x559a38705680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a352fbe00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:17.563220+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b7c000/0x0/0x4ffc00000, data 0x1625626/0x16e0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108584960 unmapped: 20037632 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:18.563391+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108584960 unmapped: 20037632 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:19.563605+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108584960 unmapped: 20037632 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221235 data_alloc: 218103808 data_used: 5369856
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:20.563856+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 20029440 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:21.564021+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 20029440 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f99ef000/0x0/0x4ffc00000, data 0x17b265f/0x186d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:22.564284+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 20029440 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3104 syncs, 3.57 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1677 writes, 5010 keys, 1677 commit groups, 1.0 writes per commit group, ingest: 5.64 MB, 0.01 MB/s
                                           Interval WAL: 1677 writes, 710 syncs, 2.36 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:23.564558+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 20029440 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eac000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.542958260s of 12.671705246s, submitted: 30
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eac000 session 0x559a34585e00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:24.564786+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 19726336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225404 data_alloc: 218103808 data_used: 5369856
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:25.565002+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 19726336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:26.565272+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109076480 unmapped: 19546112 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x17d6682/0x1892000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:27.565494+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110125056 unmapped: 18497536 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:28.565704+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 18489344 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x17d6682/0x1892000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:29.565966+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 18481152 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242124 data_alloc: 218103808 data_used: 7774208
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:30.566282+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 18472960 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:31.566524+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 18472960 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:32.566736+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 18472960 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x17d6682/0x1892000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:33.566920+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 18472960 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:34.567052+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 18464768 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242748 data_alloc: 218103808 data_used: 7778304
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:35.567236+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 18464768 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.805513382s of 11.841412544s, submitted: 9
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:36.567393+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110657536 unmapped: 17965056 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:37.567565+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f95d2000/0x0/0x4ffc00000, data 0x1bce682/0x1c8a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:38.567735+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:39.567895+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:40.568095+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298356 data_alloc: 234881024 data_used: 9617408
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:41.568292+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f95d2000/0x0/0x4ffc00000, data 0x1bce682/0x1c8a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:42.568449+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:43.568651+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:44.568873+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f95d2000/0x0/0x4ffc00000, data 0x1bce682/0x1c8a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 17571840 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:45.569027+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298356 data_alloc: 234881024 data_used: 9617408
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a385d0780
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a3450cb40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 17563648 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.844120026s of 10.036936760s, submitted: 67
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:46.569423+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a34fb6780
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 21454848 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:47.569833+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 21454848 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:48.570135+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c4d000/0x0/0x4ffc00000, data 0x15535fd/0x160d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 21454848 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:49.570389+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 21454848 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:50.570551+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206836 data_alloc: 218103808 data_used: 5369856
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 21454848 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a35f2de00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9c4d000/0x0/0x4ffc00000, data 0x15535fd/0x160d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3792a000 session 0x559a380d70e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:51.571085+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3792a000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3792a000 session 0x559a35a3d680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:52.571495+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:53.572054+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:54.572638+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:55.572893+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:56.573383+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:57.573827+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:58.574292+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:43:59.574709+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:00.574911+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:01.575137+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:02.575268+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:03.575427+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:04.576202+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 23871488 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:05.576354+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:06.576473+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:07.576635+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:08.576808+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:09.576966+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:10.577146+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:11.577432+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:12.577779+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:13.577980+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:14.578151+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:15.578420+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079774 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104759296 unmapped: 23863296 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89b000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:16.578689+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a34fb7c20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a34fb6000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a34fb74a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a34fb6960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.608356476s of 30.821556091s, submitted: 55
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a37fed680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 23838720 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:17.578957+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 23838720 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:18.579994+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa51b000/0x0/0x4ffc00000, data 0xc89578/0xd41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 23838720 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:19.580282+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 23838720 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:20.580419+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106332 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a37fec000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104783872 unmapped: 23838720 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35eadc00 session 0x559a37fede00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:21.581160+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37fecd20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3792a000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3792a000 session 0x559a3852be00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 23855104 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:22.581393+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa51a000/0x0/0x4ffc00000, data 0xc8959b/0xd42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 23855104 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:23.581825+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104767488 unmapped: 23855104 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:24.581973+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:25.582115+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133049 data_alloc: 218103808 data_used: 3821568
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:26.582639+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:27.582850+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:28.583023+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa51a000/0x0/0x4ffc00000, data 0xc8959b/0xd42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:29.583220+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:30.583423+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133049 data_alloc: 218103808 data_used: 3821568
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:31.583581+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:32.583749+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa51a000/0x0/0x4ffc00000, data 0xc8959b/0xd42000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 104800256 unmapped: 23822336 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:33.583915+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.136903763s of 17.181941986s, submitted: 9
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:34.584068+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 19349504 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:35.584248+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 19341312 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f40000/0x0/0x4ffc00000, data 0x126359b/0x131c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178007 data_alloc: 218103808 data_used: 4296704
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:36.584424+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 19341312 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:37.584615+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 20234240 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:38.584788+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 20234240 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:39.584952+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 sudo[302200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- lvm list --format json
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:40.585212+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182447 data_alloc: 218103808 data_used: 4304896
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:41.585447+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:42.585636+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:43.585887+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:44.586071+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:45.586228+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182447 data_alloc: 218103808 data_used: 4304896
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:46.586490+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:47.586664+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:48.586823+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:49.586944+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 20226048 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:50.587145+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182447 data_alloc: 218103808 data_used: 4304896
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:51.587654+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:52.588058+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:53.588209+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:54.588457+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a34fb7860
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35eadc00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:55.588723+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182447 data_alloc: 218103808 data_used: 4304896
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:56.588961+0000)
Sep 30 15:00:20 compute-0 sudo[302200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:57.589279+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:58.589462+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:44:59.589634+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f36000/0x0/0x4ffc00000, data 0x126d59b/0x1326000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:00.589818+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182447 data_alloc: 218103808 data_used: 4304896
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:01.590035+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 20217856 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a34fb54a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a385c0f00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:02.590211+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a35b7ba40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f000 session 0x559a35b7b4a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 20201472 heap: 128622592 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37944400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.113771439s of 28.566558838s, submitted: 42
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37944400 session 0x559a37fec5a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a374d4f00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:03.590376+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 25387008 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:04.590524+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 25387008 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f97b3000/0x0/0x4ffc00000, data 0x19ef5fd/0x1aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:05.590684+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 25346048 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237194 data_alloc: 218103808 data_used: 4304896
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:06.590826+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f97b3000/0x0/0x4ffc00000, data 0x19ef5fd/0x1aa9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 26673152 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:07.591007+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 26664960 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:08.591262+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a38704780
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 26345472 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:09.591373+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 26345472 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x1a14620/0x1acf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:10.591522+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110084096 unmapped: 23797760 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297304 data_alloc: 234881024 data_used: 12181504
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:11.591667+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 21069824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:12.591812+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 21069824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:13.592003+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 21069824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x1a14620/0x1acf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:14.592256+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 21037056 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x1a14620/0x1acf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:15.592415+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 21004288 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297304 data_alloc: 234881024 data_used: 12181504
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:16.593247+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 21004288 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:17.593409+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 21004288 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:18.593532+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 20996096 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:19.593706+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 20996096 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.973478317s of 17.503797531s, submitted: 156
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:20.593862+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x1a14620/0x1acf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114573312 unmapped: 19308544 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337706 data_alloc: 234881024 data_used: 12423168
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:21.594005+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 19038208 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:22.594127+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 19030016 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:23.594232+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 19030016 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:24.594359+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 19030016 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:25.594513+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 19030016 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341346 data_alloc: 234881024 data_used: 12734464
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:26.594655+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 19030016 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:27.594818+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:28.594955+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:29.595091+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:30.595210+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341346 data_alloc: 234881024 data_used: 12734464
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:31.595482+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:32.595719+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:33.595844+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:34.596080+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19021824 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:35.596337+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341346 data_alloc: 234881024 data_used: 12734464
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:36.596572+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:37.596763+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:38.596925+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:39.597093+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:40.597266+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341346 data_alloc: 234881024 data_used: 12734464
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:41.597448+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114868224 unmapped: 19013632 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a36459800 session 0x559a384e3680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36ff6400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:42.597627+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 19005440 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.791982651s of 22.966978073s, submitted: 47
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:43.598053+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 18997248 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3793b400 session 0x559a3852b680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f71000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:44.598243+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114909184 unmapped: 18972672 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:45.598369+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 20578304 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:46.598532+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 20545536 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:47.598759+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 20504576 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203800 session 0x559a375712c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3793b400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:48.598996+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 20504576 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:49.599217+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 20496384 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:50.599397+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 20496384 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:51.599614+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 20488192 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:52.599923+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 20488192 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:53.600102+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 20488192 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:54.600227+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 20480000 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:55.600444+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 20480000 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:56.600657+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 20480000 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:57.600876+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 20480000 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:58.601064+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:45:59.601249+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:00.601364+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:01.601521+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:02.601670+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:03.601808+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:04.601960+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:05.602094+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 20471808 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:06.602257+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 20463616 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:07.602430+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 20463616 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:08.602722+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:09.602842+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:10.602973+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:11.603142+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:12.603251+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:13.603432+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 20455424 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:14.603599+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 20447232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:15.603800+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 20447232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338914 data_alloc: 234881024 data_used: 12738560
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:16.604024+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 20447232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:17.604233+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 20447232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:18.604380+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 20447232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.032520294s of 36.020671844s, submitted: 236
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a35a3d680
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f000 session 0x559a37c61e00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:19.604510+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f92a3000/0x0/0x4ffc00000, data 0x1efe620/0x1fb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37945800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 24952832 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37945800 session 0x559a37cf61e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:20.604637+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190986 data_alloc: 218103808 data_used: 4304896
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:21.604796+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:22.604975+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:23.605123+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:24.605250+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126e59b/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:25.605429+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190986 data_alloc: 218103808 data_used: 4304896
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:26.605565+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:27.605755+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126e59b/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:28.605945+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126e59b/0x1327000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:29.606257+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:30.606392+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190986 data_alloc: 218103808 data_used: 4304896
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:31.606549+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a3852b0e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:32.606658+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 24903680 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.846877098s of 13.960657120s, submitted: 42
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:33.606815+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a3750a1e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:34.606990+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:35.607163+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:36.607400+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:37.607632+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:38.607784+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:39.607984+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:40.608134+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:41.608333+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:42.608518+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:43.608690+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:44.608860+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:45.609001+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:46.609213+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:47.609383+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:48.609529+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:49.609701+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:50.609872+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:51.610056+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:52.610247+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:53.610417+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:54.610590+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:55.610823+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:56.610994+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:57.611254+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:58.611443+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:46:59.611631+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:00.611826+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:01.612267+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:02.612657+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 26574848 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.090757370s of 30.147060394s, submitted: 17
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a3852af00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:03.612985+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25944064 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:04.613143+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25944064 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0xeed578/0xfa5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:05.613279+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25944064 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141040 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:06.613442+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f000 session 0x559a374d5c20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25944064 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a3852b4a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:07.613619+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25944064 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0xeed578/0xfa5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:08.613805+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a378d2f00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a384e2780
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 26591232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:09.613978+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 26591232 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:10.614128+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107323392 unmapped: 26558464 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170518 data_alloc: 218103808 data_used: 4247552
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:11.614275+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 26034176 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:12.614456+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 26034176 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a35a3d4a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a380d7e00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:13.614593+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.329400063s of 10.396521568s, submitted: 12
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f000 session 0x559a385fdc20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907588/0x9c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:14.614815+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:15.614997+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097695 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:16.615135+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:17.615278+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:18.615416+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:19.615547+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:20.615715+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097695 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:21.615910+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:22.616070+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:23.616239+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26714112 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:24.616361+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a34fb74a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a379c3a40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a379c25a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37f88800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37f88800 session 0x559a379c34a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36456800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.362629890s of 11.389686584s, submitted: 7
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 23904256 heap: 133881856 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a36456800 session 0x559a379c2960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a36456800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a36456800 session 0x559a35075a40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a37d7e960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:25.616504+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a37d7fe00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37c601e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169478 data_alloc: 218103808 data_used: 147456
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:26.616669+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9fef000/0x0/0x4ffc00000, data 0x11b35ea/0x126d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:27.616852+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9fef000/0x0/0x4ffc00000, data 0x11b35ea/0x126d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:28.616988+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:29.617134+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:30.617272+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: mgrc ms_handle_reset ms_handle_reset con 0x559a36ff6c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1364357926
Sep 30 15:00:20 compute-0 ceph-osd[82707]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1364357926,v1:192.168.122.100:6801/1364357926]
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: get_auth_request con 0x559a3644f000 auth_method 0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: mgrc handle_mgr_configure stats_period=5
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9fef000/0x0/0x4ffc00000, data 0x11b35ea/0x126d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 29532160 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169478 data_alloc: 218103808 data_used: 147456
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:31.617403+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a37c610e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9fcb000/0x0/0x4ffc00000, data 0x11d75ea/0x1291000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108339200 unmapped: 29220864 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:32.617555+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108339200 unmapped: 29220864 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:33.617673+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 29417472 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:34.617799+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 26157056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:35.618128+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 26157056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a37c614a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37d7fa40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225050 data_alloc: 218103808 data_used: 7917568
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:36.618227+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.493298531s of 11.615280151s, submitted: 40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a378d2960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa878000/0x0/0x4ffc00000, data 0x92b5da/0x9e4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:37.618578+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:38.618744+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:39.618903+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:40.619044+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:41.619253+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106714 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:42.619386+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:43.619514+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:44.619630+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:45.619692+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:46.619867+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106714 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:47.620059+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:48.620201+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:49.620361+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:50.620487+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:51.620637+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106714 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:52.620768+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:53.620924+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:54.621106+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:55.621254+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:56.621401+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106714 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:57.621629+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:58.621757+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:47:59.621936+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:00.622125+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27761 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:01.622277+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106714 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:02.622447+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:03.623951+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 108945408 unmapped: 28614656 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:04.625379+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c0b800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.454156876s of 28.542760849s, submitted: 19
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c0b800 session 0x559a37c61c20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 28065792 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:05.625589+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89c000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 28065792 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:06.625850+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122452 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 28065792 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:07.626886+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6cb000/0x0/0x4ffc00000, data 0xad9578/0xb91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 28065792 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:08.627903+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a37f42960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 28065792 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:09.628082+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a3750a960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a37570960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37fed4a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 27983872 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:10.628344+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6cb000/0x0/0x4ffc00000, data 0xad9578/0xb91000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a379d3800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 27983872 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:11.628498+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124816 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644f400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:12.628712+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:13.628907+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6a7000/0x0/0x4ffc00000, data 0xafd578/0xbb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:14.629453+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:15.629877+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:16.630033+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6a7000/0x0/0x4ffc00000, data 0xafd578/0xbb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136672 data_alloc: 218103808 data_used: 1855488
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6a7000/0x0/0x4ffc00000, data 0xafd578/0xbb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:17.630290+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:18.630579+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:19.631004+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:20.631246+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6a7000/0x0/0x4ffc00000, data 0xafd578/0xbb5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:21.631527+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 28033024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136672 data_alloc: 218103808 data_used: 1855488
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.582111359s of 16.620002747s, submitted: 6
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:22.631704+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 24715264 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:23.631903+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 25042944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:24.632108+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 25042944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:25.632288+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 25042944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa340000/0x0/0x4ffc00000, data 0xe64578/0xf1c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:26.632424+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24961024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181628 data_alloc: 218103808 data_used: 2711552
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:27.632627+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 24961024 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:28.632758+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 25321472 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:29.633484+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 25321472 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:30.633616+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa31f000/0x0/0x4ffc00000, data 0xe85578/0xf3d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 25321472 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:31.633762+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 25239552 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182204 data_alloc: 218103808 data_used: 2744320
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:32.633918+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 26968064 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:33.634135+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 26968064 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644f400 session 0x559a386e4d20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a379d3800 session 0x559a386cf860
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.981079102s of 12.167107582s, submitted: 42
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:34.634299+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:35.634854+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a384e30e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa879000/0x0/0x4ffc00000, data 0x92b578/0x9e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:36.635005+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:37.635316+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:38.635585+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:39.635819+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:40.636033+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:41.636261+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:42.636492+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:43.636720+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:44.636938+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:45.637074+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:46.637233+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:47.637456+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:48.637644+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:49.637847+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:50.638080+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:51.638301+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:52.638540+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:53.638749+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:54.638960+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:55.639138+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:56.639388+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:57.639620+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:58.639887+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa89d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:48:59.640122+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:00.640283+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:01.640448+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112166 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:02.640582+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109117440 unmapped: 28442624 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.728111267s of 28.756206512s, submitted: 8
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a35e88000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:03.640798+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:04.641036+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:05.641295+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:06.641447+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135562 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:07.641663+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:08.641859+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:09.642057+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 28303360 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35a7d400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:10.642233+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 28286976 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:11.642407+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 28286976 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156842 data_alloc: 218103808 data_used: 3321856
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:12.642616+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 28286976 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:13.642836+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 28286976 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:14.643006+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109273088 unmapped: 28286976 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:15.643248+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 28270592 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:16.643375+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 28270592 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156842 data_alloc: 218103808 data_used: 3321856
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:17.643653+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 28270592 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa57f000/0x0/0x4ffc00000, data 0xc25578/0xcdd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:18.643854+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 28270592 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:19.644046+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 28270592 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.302663803s of 17.326045990s, submitted: 2
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:20.644299+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112336896 unmapped: 25223168 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:21.644480+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 25157632 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199073 data_alloc: 218103808 data_used: 3313664
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a35b7ba40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35afb400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35afb400 session 0x559a35b7a960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a37c61860
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a386e50e0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:22.644637+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a34fb6960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 24764416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b73000/0x0/0x4ffc00000, data 0x1219578/0x12d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:23.644810+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 24764416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:24.645077+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 24764416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:25.645316+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 24764416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:26.645653+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24805376 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216955 data_alloc: 218103808 data_used: 3313664
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:27.645883+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24805376 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a379d3800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:28.646099+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b73000/0x0/0x4ffc00000, data 0x1219578/0x12d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 25477120 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:29.646339+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25927680 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:30.646560+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112271360 unmapped: 25288704 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:31.646719+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227647 data_alloc: 218103808 data_used: 5545984
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b7b000/0x0/0x4ffc00000, data 0x1219578/0x12d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:32.646842+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27301 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:33.647040+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:34.647274+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b7b000/0x0/0x4ffc00000, data 0x1219578/0x12d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:35.647507+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:36.647697+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9b7b000/0x0/0x4ffc00000, data 0x1219578/0x12d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227647 data_alloc: 218103808 data_used: 5545984
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:37.647843+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:38.647998+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112353280 unmapped: 25206784 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:39.648391+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.042070389s of 19.426975250s, submitted: 48
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 22102016 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:40.648520+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 21430272 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:41.648648+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266207 data_alloc: 218103808 data_used: 6287360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:42.648803+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9836000/0x0/0x4ffc00000, data 0x155e578/0x1616000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:43.648983+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:44.650749+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:45.650872+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:46.650999+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 21348352 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263887 data_alloc: 218103808 data_used: 6287360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:47.651144+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115646464 unmapped: 21913600 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9815000/0x0/0x4ffc00000, data 0x157f578/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:48.651299+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115646464 unmapped: 21913600 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:49.651457+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115646464 unmapped: 21913600 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:50.651618+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 21905408 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:51.651800+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 21905408 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264799 data_alloc: 218103808 data_used: 6356992
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:52.651998+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9815000/0x0/0x4ffc00000, data 0x157f578/0x1637000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.029740334s of 13.182536125s, submitted: 43
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 21815296 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:53.652143+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 21815296 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:54.652262+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a379d3800 session 0x559a3811eb40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 114909184 unmapped: 22650880 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203c00 session 0x559a37c60f00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:55.652528+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 23683072 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9dd0000/0x0/0x4ffc00000, data 0xfc4578/0x107c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:56.652735+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 23683072 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193731 data_alloc: 218103808 data_used: 3313664
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:57.652949+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 23683072 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35a7d400 session 0x559a35f2d2c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:58.653124+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a35a3da40
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:49:59.653299+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:00.653481+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:01.653631+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122767 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:02.653831+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:03.654065+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:04.654225+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:05.654388+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:06.654726+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122767 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:07.655150+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:08.655363+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:09.655523+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:10.655835+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:11.656031+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122767 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:12.656215+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:13.656397+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:14.656575+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:15.656757+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:16.656952+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122767 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:17.657154+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:18.657345+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:19.657495+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:20.657659+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.805879593s of 27.928400040s, submitted: 32
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a350752c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:21.657812+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144923 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:22.657918+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203c00 session 0x559a386ce3c0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa276000/0x0/0x4ffc00000, data 0xb1e578/0xbd6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:23.658113+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a386e4960
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 25133056 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:24.658295+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa276000/0x0/0x4ffc00000, data 0xb1e578/0xbd6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a3644c800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a3644c800 session 0x559a37fedc20
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a34810800
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a34810800 session 0x559a385c45a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112435200 unmapped: 25124864 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203000
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a35203c00
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:25.658456+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112435200 unmapped: 25124864 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa275000/0x0/0x4ffc00000, data 0xb1e588/0xbd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:26.658572+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa275000/0x0/0x4ffc00000, data 0xb1e588/0xbd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 25272320 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154646 data_alloc: 218103808 data_used: 1196032
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:27.658688+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 25272320 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa275000/0x0/0x4ffc00000, data 0xb1e588/0xbd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:28.658819+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 25272320 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:29.658982+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 25272320 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:30.659213+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:31.659308+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154646 data_alloc: 218103808 data_used: 1196032
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:32.659474+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa275000/0x0/0x4ffc00000, data 0xb1e588/0xbd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:33.659588+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa275000/0x0/0x4ffc00000, data 0xb1e588/0xbd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:34.659732+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:35.659879+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112295936 unmapped: 25264128 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.447950363s of 15.509048462s, submitted: 16
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:36.660026+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 22290432 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261110 data_alloc: 218103808 data_used: 1269760
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:37.660218+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9450000/0x0/0x4ffc00000, data 0x1942588/0x19fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23830528 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:38.660390+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23830528 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:39.660583+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9431000/0x0/0x4ffc00000, data 0x1961588/0x1a1a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 23830528 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:40.660720+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f9431000/0x0/0x4ffc00000, data 0x1961588/0x1a1a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 23822336 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:41.660867+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 23822336 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263414 data_alloc: 218103808 data_used: 1269760
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:42.661032+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 23822336 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:43.661247+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 24018944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:44.661444+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 24018944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:45.661641+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f942f000/0x0/0x4ffc00000, data 0x1964588/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 24018944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:46.661816+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 24018944 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260926 data_alloc: 218103808 data_used: 1269760
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:47.662053+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 24010752 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:48.662199+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 24010752 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4f942f000/0x0/0x4ffc00000, data 0x1964588/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203000 session 0x559a38705860
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.065621376s of 13.279171944s, submitted: 76
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a35203c00 session 0x559a380d6780
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:49.662358+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: handle_auth_request added challenge on 0x559a37c0a400
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 24010752 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:50.662479+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 24010752 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:51.662640+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 25804800 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 ms_handle_reset con 0x559a37c0a400 session 0x559a37fed4a0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:52.662781+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:53.662878+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:54.663021+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:55.663155+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:56.663296+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:57.663582+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:58.663755+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:50:59.663925+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:00.664105+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:01.664211+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:02.664325+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:03.664427+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:04.664591+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:05.664729+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:06.664908+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:07.665076+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:08.665206+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:09.665444+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:10.665566+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:11.665799+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:12.665962+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:13.666266+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:14.666444+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:15.666583+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:16.666767+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:17.666971+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:18.667097+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:19.667263+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:20.667475+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:21.667637+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:22.667750+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:23.667958+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:24.668109+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:25.668307+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:26.668419+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:27.668604+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:28.668775+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:29.668903+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:30.669056+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:31.669270+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:32.669381+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:33.669537+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:34.669694+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 25788416 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:35.669845+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:36.669924+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:37.670053+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:38.670268+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:39.670430+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:40.670618+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:41.670760+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:42.670895+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:43.671030+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:44.671229+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:45.671369+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:46.671499+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:47.671691+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:48.671816+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:49.671936+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 25780224 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:50.672057+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:51.672228+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:52.672340+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:53.672458+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:54.672572+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:55.672713+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'config diff' '{prefix=config diff}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'config show' '{prefix=config show}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'counter dump' '{prefix=counter dump}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 25772032 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'counter schema' '{prefix=counter schema}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:56.672833+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 26058752 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:57.673006+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25690112 heap: 137560064 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:58.673136+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'log dump' '{prefix=log dump}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'perf dump' '{prefix=perf dump}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 37011456 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:51:59.673217+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'perf schema' '{prefix=perf schema}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 37380096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:00.673354+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 37380096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:01.673474+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 37380096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:02.673593+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 37380096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:03.673735+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 37380096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:04.673855+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111222784 unmapped: 37380096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:05.674003+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 37371904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:06.674131+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 37371904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:07.674327+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 37371904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:08.674463+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 37371904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:09.674577+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 37371904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:10.674717+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 37371904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:11.674888+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 37371904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:12.675021+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 37371904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:13.675159+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 37371904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:14.675334+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:15.675448+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:16.675566+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:17.675726+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:18.675847+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:19.675976+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:20.676093+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:21.676276+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:22.676466+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:23.676584+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:24.676709+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:25.676832+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:26.676946+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111239168 unmapped: 37363712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:27.677091+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:28.677247+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:29.677366+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:30.677521+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:31.677737+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:32.677888+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:33.677977+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:34.678119+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:35.678311+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:36.678451+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:37.678607+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:38.678740+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111247360 unmapped: 37355520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:39.678868+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:40.678997+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:41.679120+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:42.679252+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:43.679405+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:44.679570+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:45.679726+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:46.679987+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:47.680233+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:48.680391+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:49.680756+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:50.680931+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:51.681079+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:52.681229+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:53.681356+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111255552 unmapped: 37347328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:54.681710+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:55.681917+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:56.682131+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:57.682367+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:58.682491+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:52:59.682604+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:00.682930+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:01.683142+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:02.683289+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:03.683534+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:04.683777+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:05.683988+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:06.684302+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:07.684594+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:08.684854+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:09.685052+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 37339136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:10.685277+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 37330944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:11.685561+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 37330944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:12.685866+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 37330944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:13.686126+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 37330944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:14.686416+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 37330944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:15.686679+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 37330944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:16.686950+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 37330944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:17.687289+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 37330944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:18.687649+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 37330944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:19.687961+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:20.688219+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:21.688395+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:22.688562+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3971 syncs, 3.27 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1924 writes, 5855 keys, 1924 commit groups, 1.0 writes per commit group, ingest: 5.83 MB, 0.01 MB/s
                                           Interval WAL: 1924 writes, 867 syncs, 2.22 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:23.688747+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:24.688978+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:25.689149+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:26.689379+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:27.689629+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:28.689824+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:29.689980+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:30.690237+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:31.690483+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:32.690659+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:33.690823+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111280128 unmapped: 37322752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:34.690996+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 37314560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:35.691153+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 37314560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:36.691364+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 37314560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:37.691639+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 37314560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:38.691831+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 37314560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:39.691969+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 37314560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:40.692112+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 37314560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:41.692273+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 37314560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:42.692438+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:43.692619+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:44.692761+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:45.692972+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:46.693234+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:47.693462+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:48.693673+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:49.693888+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:50.694086+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:51.694288+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:52.694461+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:53.694601+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:54.694756+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:55.694925+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:56.695076+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 37306368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:57.695565+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 37298176 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:58.695971+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 37298176 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:53:59.696210+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 37298176 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:00.696702+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 37298176 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:01.697004+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 37298176 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:02.697374+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 37298176 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:03.697698+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 37298176 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:04.697895+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 37298176 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:05.705247+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 37298176 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:06.705512+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:07.705707+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:08.705996+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:09.706282+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:10.706441+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:11.706676+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:12.706808+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:13.707024+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:14.707240+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:15.707349+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:16.707472+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:17.707664+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:18.707852+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:19.707977+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:20.708094+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:21.708283+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 37289984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:22.708442+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111321088 unmapped: 37281792 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:23.708610+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111321088 unmapped: 37281792 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:24.708774+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111321088 unmapped: 37281792 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:25.708955+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111321088 unmapped: 37281792 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:26.709147+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111321088 unmapped: 37281792 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:27.709398+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111321088 unmapped: 37281792 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:28.709524+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111321088 unmapped: 37281792 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:29.709740+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111321088 unmapped: 37281792 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:30.710471+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:31.711727+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:32.712642+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:33.712796+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:34.713401+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:35.714162+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:36.714456+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:37.714715+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:38.715111+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:39.715325+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:40.715885+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:41.716185+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:42.716360+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:43.716610+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:44.716828+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:45.716984+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 37273600 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:46.717142+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:47.717348+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:48.717502+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:49.717632+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:50.717795+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:51.717944+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:52.718093+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:53.718259+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:54.718413+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:55.718556+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:56.718700+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111337472 unmapped: 37265408 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:57.718938+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 37257216 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:58.719106+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 37257216 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:54:59.719281+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 37257216 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:00.719460+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 37257216 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:01.719634+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 37257216 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:02.719780+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 37257216 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:03.719944+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 37257216 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:04.720118+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 255.292510986s of 255.510238647s, submitted: 26
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa48d000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 37257216 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:05.720238+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111353856 unmapped: 37249024 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:06.720360+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111386624 unmapped: 37216256 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:07.720531+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 36126720 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:08.720680+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 36126720 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:09.720847+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 36126720 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:10.720994+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 36126720 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:11.721116+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 36126720 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:12.721310+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 36126720 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:13.721456+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 36126720 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:14.721621+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 36118528 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:15.721845+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 36118528 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:16.722031+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 36118528 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:17.722303+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 36118528 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:18.722433+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:19.722568+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:20.722728+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:21.722877+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:22.723009+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:23.723210+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:24.723384+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:25.723549+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:26.723688+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:27.723894+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:28.724020+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:29.724242+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 36110336 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:30.724382+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:31.724513+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:32.724634+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:33.724823+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:34.725032+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:35.725296+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:36.725431+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:37.725745+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:38.725968+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:39.726218+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:40.726355+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:41.726611+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 112500736 unmapped: 36102144 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:42.726771+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.010948181s of 38.385242462s, submitted: 118
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 37142528 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:43.726921+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 37142528 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:44.727091+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [0,0,1])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 37085184 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:45.727283+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 111542272 unmapped: 37060608 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:46.727426+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 38543360 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:47.727644+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 38494208 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:48.727845+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110125056 unmapped: 38477824 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:49.728088+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110125056 unmapped: 38477824 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:50.728459+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 38469632 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:51.728629+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 38469632 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:52.728919+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 38469632 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:53.729280+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 38469632 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:54.729648+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 38469632 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:55.729786+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 38469632 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:56.729946+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 38469632 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:57.730133+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 38461440 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:58.730309+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 38461440 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:55:59.730483+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 38461440 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:00.730700+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 38461440 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:01.730848+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 38461440 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:02.731103+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 38461440 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:03.731311+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 38461440 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:04.731537+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110141440 unmapped: 38461440 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:05.731773+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 38453248 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:06.731917+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110149632 unmapped: 38453248 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:07.732234+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 38445056 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:08.733013+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 38445056 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:09.733324+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 38445056 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:10.734996+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 38445056 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:11.735249+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 38445056 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:12.735533+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 38445056 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:13.735892+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 38436864 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:14.736204+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 38436864 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:15.736436+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 38436864 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:16.736636+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 38436864 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:17.736960+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 38436864 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:18.737274+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:19.737517+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 38436864 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:20.737744+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 38436864 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:21.737949+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 38436864 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:22.738200+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:23.738619+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:24.738745+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:25.738879+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:26.739028+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:27.739254+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:28.739363+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:29.739489+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:30.739657+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:31.739835+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:32.743699+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:33.743837+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:34.744003+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:35.744150+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:36.744393+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:37.744639+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:38.744905+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:39.745098+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:40.745263+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:41.745389+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:42.745523+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:43.745668+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:44.745840+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:45.746020+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:46.746469+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:47.746669+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:48.746839+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:49.747009+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:50.747209+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:51.747369+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:52.747539+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:53.747750+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:54.747872+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:55.748016+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:56.748223+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:57.748446+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:58.748594+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:56:59.748983+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:00.749119+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:01.749278+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:02.749464+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:03.749635+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:04.749796+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:05.749906+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:06.750093+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:07.750310+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:08.750465+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:09.750613+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:10.750782+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:11.750924+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:12.751137+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:13.751277+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:14.751501+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:15.751640+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:16.751796+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:17.751953+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:18.752209+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:19.752383+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:20.752582+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:21.752743+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:22.752893+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:23.753030+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:24.753154+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:25.753317+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 38428672 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:26.753467+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:27.753656+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:28.753827+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:29.753976+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:30.754202+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:31.754357+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:32.754510+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:33.754638+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:34.754795+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:35.754968+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:36.755090+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:37.755251+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 38420480 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:38.755420+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:39.755579+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:40.755755+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:41.755909+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:42.756143+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:43.756392+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:44.756540+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:45.756667+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:46.756833+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:47.757049+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:48.757298+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:49.757491+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 38412288 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:50.757705+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:51.757899+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:52.758069+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:53.758258+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:54.758605+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:55.758794+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:56.759017+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:57.759423+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:58.759594+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:57:59.759760+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:00.759921+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 38404096 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:01.760145+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:02.760337+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:03.760478+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:04.760640+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:05.760833+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:06.761042+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:07.761264+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:08.761476+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:09.761652+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:10.761816+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:11.761989+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:12.762142+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:13.762285+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110206976 unmapped: 38395904 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:14.762471+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 38387712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:15.762623+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 38387712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:16.762764+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 38387712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:17.762943+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 38387712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:18.763101+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 38387712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:19.763265+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 38387712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:20.763392+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 38387712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:21.763520+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 38387712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:22.763717+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 38387712 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets getting new tickets!
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:23.763960+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _finish_auth 0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:23.764840+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:24.764092+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:25.764254+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:26.764390+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:27.764582+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:28.764703+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:29.764862+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:30.765126+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:31.765282+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:32.765455+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:33.765610+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 38379520 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:34.765765+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:35.765919+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:36.766099+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:37.766328+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:38.766514+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:39.766650+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:40.766782+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:41.766918+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:42.767248+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:43.767389+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:44.767525+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 38371328 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:45.767703+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110239744 unmapped: 38363136 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:46.767869+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:47.768050+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:48.769052+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:49.769955+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:50.770691+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:51.770869+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:52.771326+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:53.771536+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:54.771781+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:55.771974+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:56.772201+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:57.772480+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:58.772908+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:58:59.773230+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:00.773419+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 38354944 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:01.773621+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:02.773908+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:03.774053+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:04.774406+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:05.774705+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:06.775007+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:07.775323+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:08.775572+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:09.775935+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:10.776276+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:11.776454+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:12.776681+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:13.776852+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:14.776996+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 38346752 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:15.777215+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:16.777632+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:17.777901+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:18.778061+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:19.778251+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:20.778468+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:21.778674+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:22.778840+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:23.779010+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:24.779161+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:25.779351+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:26.779493+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:27.779674+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:28.779860+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:29.780008+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110264320 unmapped: 38338560 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:30.780222+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:31.780369+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:32.780555+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:33.780690+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:34.780860+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:35.781108+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:36.781238+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:37.781454+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:38.781593+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:39.781766+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:40.781951+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:41.782082+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:42.782338+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:43.782493+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:44.782620+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:45.782741+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 38330368 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:46.782868+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 38313984 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'config diff' '{prefix=config diff}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:47.782998+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'config show' '{prefix=config show}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'counter dump' '{prefix=counter dump}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fb4ad000/0x0/0x4ffc00000, data 0x907578/0x9bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'counter schema' '{prefix=counter schema}'
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 38174720 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:48.783101+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 37879808 heap: 148602880 old mem: 2845415832 new mem: 2845415832
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Sep 30 15:00:20 compute-0 ceph-osd[82707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Sep 30 15:00:20 compute-0 ceph-osd[82707]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132897 data_alloc: 218103808 data_used: 143360
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: tick
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_tickets
Sep 30 15:00:20 compute-0 ceph-osd[82707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-09-30T14:59:49.783233+0000)
Sep 30 15:00:20 compute-0 ceph-osd[82707]: do_command 'log dump' '{prefix=log dump}'
Sep 30 15:00:20 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Sep 30 15:00:20 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316553745' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 15:00:20 compute-0 rsyslogd[1004]: imjournal from <np0005462840:ceph-osd>: begin to drop messages due to rate-limiting
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.18183 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/612524074' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.27271 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.18189 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3845630725' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.27743 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1824593376' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.27289 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2277989491' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.18204 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3165593364' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.27761 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.27301 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1955644768' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3316553745' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18219 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Sep 30 15:00:20 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27313 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:20 compute-0 podman[302311]: 2025-09-30 15:00:20.961166583 +0000 UTC m=+0.052132872 container create 9c04208e618be49376e730ca404a90b6ae1373877e06c3ff40d2d09c71df4353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Sep 30 15:00:20 compute-0 systemd[1]: Started libpod-conmon-9c04208e618be49376e730ca404a90b6ae1373877e06c3ff40d2d09c71df4353.scope.
Sep 30 15:00:20 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27782 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 15:00:21 compute-0 podman[302311]: 2025-09-30 15:00:20.934794815 +0000 UTC m=+0.025761124 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 15:00:21 compute-0 podman[302311]: 2025-09-30 15:00:21.036902042 +0000 UTC m=+0.127868351 container init 9c04208e618be49376e730ca404a90b6ae1373877e06c3ff40d2d09c71df4353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Sep 30 15:00:21 compute-0 podman[302311]: 2025-09-30 15:00:21.043946698 +0000 UTC m=+0.134912977 container start 9c04208e618be49376e730ca404a90b6ae1373877e06c3ff40d2d09c71df4353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cannon, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 15:00:21 compute-0 jovial_cannon[302346]: 167 167
Sep 30 15:00:21 compute-0 systemd[1]: libpod-9c04208e618be49376e730ca404a90b6ae1373877e06c3ff40d2d09c71df4353.scope: Deactivated successfully.
Sep 30 15:00:21 compute-0 podman[302311]: 2025-09-30 15:00:21.05163658 +0000 UTC m=+0.142602859 container attach 9c04208e618be49376e730ca404a90b6ae1373877e06c3ff40d2d09c71df4353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Sep 30 15:00:21 compute-0 podman[302311]: 2025-09-30 15:00:21.052216795 +0000 UTC m=+0.143183074 container died 9c04208e618be49376e730ca404a90b6ae1373877e06c3ff40d2d09c71df4353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Sep 30 15:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-80b3069b2ae62dc86078dbd192630cd0e853581e264825f291a40fe602646add-merged.mount: Deactivated successfully.
Sep 30 15:00:21 compute-0 podman[302311]: 2025-09-30 15:00:21.091899994 +0000 UTC m=+0.182866273 container remove 9c04208e618be49376e730ca404a90b6ae1373877e06c3ff40d2d09c71df4353 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Sep 30 15:00:21 compute-0 systemd[1]: libpod-conmon-9c04208e618be49376e730ca404a90b6ae1373877e06c3ff40d2d09c71df4353.scope: Deactivated successfully.
Sep 30 15:00:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Sep 30 15:00:21 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2307955320' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 15:00:21 compute-0 nova_compute[261524]: 2025-09-30 15:00:21.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:21 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18237 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 podman[302376]: 2025-09-30 15:00:21.297434692 +0000 UTC m=+0.068336616 container create 65c156178fb23e7ff6dc6d79dd74c539ec5932edcfbc2ca259cabd7ec7485687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_austin, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Sep 30 15:00:21 compute-0 systemd[1]: Started libpod-conmon-65c156178fb23e7ff6dc6d79dd74c539ec5932edcfbc2ca259cabd7ec7485687.scope.
Sep 30 15:00:21 compute-0 systemd[1]: Started libcrun container.
Sep 30 15:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61fb88b9b55c97ac3b77810fd284d3658c2a44803633f7d691f614b4195993c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61fb88b9b55c97ac3b77810fd284d3658c2a44803633f7d691f614b4195993c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61fb88b9b55c97ac3b77810fd284d3658c2a44803633f7d691f614b4195993c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61fb88b9b55c97ac3b77810fd284d3658c2a44803633f7d691f614b4195993c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:21 compute-0 podman[302376]: 2025-09-30 15:00:21.383623752 +0000 UTC m=+0.154525776 container init 65c156178fb23e7ff6dc6d79dd74c539ec5932edcfbc2ca259cabd7ec7485687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_austin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Sep 30 15:00:21 compute-0 podman[302376]: 2025-09-30 15:00:21.276184052 +0000 UTC m=+0.047085996 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 15:00:21 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:21 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000024s ======
Sep 30 15:00:21 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:21.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Sep 30 15:00:21 compute-0 podman[302376]: 2025-09-30 15:00:21.392633807 +0000 UTC m=+0.163535741 container start 65c156178fb23e7ff6dc6d79dd74c539ec5932edcfbc2ca259cabd7ec7485687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Sep 30 15:00:21 compute-0 podman[302376]: 2025-09-30 15:00:21.396187446 +0000 UTC m=+0.167089390 container attach 65c156178fb23e7ff6dc6d79dd74c539ec5932edcfbc2ca259cabd7ec7485687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Sep 30 15:00:21 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27322 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27794 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Sep 30 15:00:21 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1053074916' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18258 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]: {
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:     "0": [
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:         {
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "devices": [
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "/dev/loop3"
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             ],
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "lv_name": "ceph_lv0",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "lv_size": "21470642176",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5e3c7776-ac03-5698-b79f-a6dc2d80cae6,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1bf35304-bfb4-41f5-b832-570aa31de1b2,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "lv_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "name": "ceph_lv0",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "path": "/dev/ceph_vg0/ceph_lv0",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "tags": {
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.block_uuid": "f0B035-p06E-P7cT-d2fb-almf-6QtT-b1OY7L",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.cephx_lockbox_secret": "",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.cluster_fsid": "5e3c7776-ac03-5698-b79f-a6dc2d80cae6",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.cluster_name": "ceph",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.crush_device_class": "",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.encrypted": "0",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.osd_fsid": "1bf35304-bfb4-41f5-b832-570aa31de1b2",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.osd_id": "0",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.osdspec_affinity": "default_drive_group",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.type": "block",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.vdo": "0",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:                 "ceph.with_tpm": "0"
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             },
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "type": "block",
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:             "vg_name": "ceph_vg0"
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:         }
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]:     ]
Sep 30 15:00:21 compute-0 ecstatic_austin[302416]: }
Sep 30 15:00:21 compute-0 systemd[1]: libpod-65c156178fb23e7ff6dc6d79dd74c539ec5932edcfbc2ca259cabd7ec7485687.scope: Deactivated successfully.
Sep 30 15:00:21 compute-0 podman[302376]: 2025-09-30 15:00:21.760285019 +0000 UTC m=+0.531186953 container died 65c156178fb23e7ff6dc6d79dd74c539ec5932edcfbc2ca259cabd7ec7485687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_austin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.18219 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.27313 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/610067927' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.27782 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1186070082' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2307955320' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.18237 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.27322 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.27794 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/687688909' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2625842095' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 15:00:21 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1053074916' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 15:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-61fb88b9b55c97ac3b77810fd284d3658c2a44803633f7d691f614b4195993c2-merged.mount: Deactivated successfully.
Sep 30 15:00:21 compute-0 podman[302376]: 2025-09-30 15:00:21.813897467 +0000 UTC m=+0.584799391 container remove 65c156178fb23e7ff6dc6d79dd74c539ec5932edcfbc2ca259cabd7ec7485687 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_austin, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Sep 30 15:00:21 compute-0 systemd[1]: libpod-conmon-65c156178fb23e7ff6dc6d79dd74c539ec5932edcfbc2ca259cabd7ec7485687.scope: Deactivated successfully.
Sep 30 15:00:21 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27337 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 sudo[302200]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:21 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18261 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:21 compute-0 sudo[302519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Sep 30 15:00:21 compute-0 sudo[302519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:21 compute-0 sudo[302519]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:22 compute-0 sudo[302552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/5e3c7776-ac03-5698-b79f-a6dc2d80cae6/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 5e3c7776-ac03-5698-b79f-a6dc2d80cae6 -- raw list --format json
Sep 30 15:00:22 compute-0 sudo[302552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:22 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18273 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Sep 30 15:00:22 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1592697873' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 15:00:22 compute-0 crontab[302610]: (root) LIST (root)
Sep 30 15:00:22 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27352 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:22 compute-0 nova_compute[261524]: 2025-09-30 15:00:22.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:22 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:22 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:22 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:22.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:22 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18279 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:22 compute-0 podman[302701]: 2025-09-30 15:00:22.468472686 +0000 UTC m=+0.057482685 container create e60b386eaf594144ef523b6ce2d66706705f4d65f8846ec96c7b47edac158421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Sep 30 15:00:22 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18285 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:22 compute-0 systemd[1]: Started libpod-conmon-e60b386eaf594144ef523b6ce2d66706705f4d65f8846ec96c7b47edac158421.scope.
Sep 30 15:00:22 compute-0 podman[302701]: 2025-09-30 15:00:22.436318374 +0000 UTC m=+0.025328413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 15:00:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 15:00:22 compute-0 podman[302701]: 2025-09-30 15:00:22.574916132 +0000 UTC m=+0.163926161 container init e60b386eaf594144ef523b6ce2d66706705f4d65f8846ec96c7b47edac158421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 15:00:22 compute-0 podman[302701]: 2025-09-30 15:00:22.583927167 +0000 UTC m=+0.172937166 container start e60b386eaf594144ef523b6ce2d66706705f4d65f8846ec96c7b47edac158421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 15:00:22 compute-0 podman[302701]: 2025-09-30 15:00:22.588009899 +0000 UTC m=+0.177019918 container attach e60b386eaf594144ef523b6ce2d66706705f4d65f8846ec96c7b47edac158421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Sep 30 15:00:22 compute-0 happy_booth[302750]: 167 167
Sep 30 15:00:22 compute-0 systemd[1]: libpod-e60b386eaf594144ef523b6ce2d66706705f4d65f8846ec96c7b47edac158421.scope: Deactivated successfully.
Sep 30 15:00:22 compute-0 podman[302701]: 2025-09-30 15:00:22.596403348 +0000 UTC m=+0.185413357 container died e60b386eaf594144ef523b6ce2d66706705f4d65f8846ec96c7b47edac158421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Sep 30 15:00:22 compute-0 podman[302721]: 2025-09-30 15:00:22.607787892 +0000 UTC m=+0.084025737 container health_status b3fb2fd96e3ed0561f41ccf5f3a6a8f5edc1610051f196882b9fe64f54041c07 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Sep 30 15:00:22 compute-0 podman[302732]: 2025-09-30 15:00:22.622301334 +0000 UTC m=+0.092716724 container health_status c96c6cc1b89de09f21811bef665f35e069874e3ad1bbd4c37d02227af98e3458 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Sep 30 15:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-be9ceef08889c474dc4611215144f3867bc68e24a80884e40ab450e73645c0ae-merged.mount: Deactivated successfully.
Sep 30 15:00:22 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 15:00:22 compute-0 podman[302701]: 2025-09-30 15:00:22.645995455 +0000 UTC m=+0.235005454 container remove e60b386eaf594144ef523b6ce2d66706705f4d65f8846ec96c7b47edac158421 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_booth, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 15:00:22 compute-0 podman[302715]: 2025-09-30 15:00:22.647649416 +0000 UTC m=+0.135396679 container health_status 3f9405f717bf7bccb1d94628a6cea0442375ebf8d5cf43ef2536ee30dce6c6e0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid)
Sep 30 15:00:22 compute-0 systemd[1]: libpod-conmon-e60b386eaf594144ef523b6ce2d66706705f4d65f8846ec96c7b47edac158421.scope: Deactivated successfully.
Sep 30 15:00:22 compute-0 podman[302718]: 2025-09-30 15:00:22.662062026 +0000 UTC m=+0.148082145 container health_status 8ace90c1ad44d98a416105b72e1c541724757ab08efa10adfaa0df7e8c2027c6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Sep 30 15:00:22 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27370 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.693883) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244422693914, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1227, "num_deletes": 261, "total_data_size": 2061298, "memory_usage": 2089152, "flush_reason": "Manual Compaction"}
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244422704468, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2022944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37135, "largest_seqno": 38361, "table_properties": {"data_size": 2016791, "index_size": 3292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14882, "raw_average_key_size": 20, "raw_value_size": 2003791, "raw_average_value_size": 2818, "num_data_blocks": 141, "num_entries": 711, "num_filter_entries": 711, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759244328, "oldest_key_time": 1759244328, "file_creation_time": 1759244422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 10643 microseconds, and 4456 cpu microseconds.
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.704520) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2022944 bytes OK
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.704558) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.705623) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.705639) EVENT_LOG_v1 {"time_micros": 1759244422705633, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.705660) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2055412, prev total WAL file size 2055412, number of live WAL files 2.
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.706314) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323539' seq:0, type:0; will stop at (end)
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1975KB)], [80(12MB)]
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244422706345, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15045026, "oldest_snapshot_seqno": -1}
Sep 30 15:00:22 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27836 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6941 keys, 14875869 bytes, temperature: kUnknown
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244422866933, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 14875869, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14830004, "index_size": 27385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 182834, "raw_average_key_size": 26, "raw_value_size": 14705529, "raw_average_value_size": 2118, "num_data_blocks": 1080, "num_entries": 6941, "num_filter_entries": 6941, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759241526, "oldest_key_time": 0, "file_creation_time": 1759244422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4a74fe2f-a33e-416b-ba25-743e7942b3ac", "db_session_id": "KY5CTSKWFSFJYE5835A9", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Sep 30 15:00:22 compute-0 podman[302866]: 2025-09-30 15:00:22.866833055 +0000 UTC m=+0.096579711 container create 5e98b49b95dee53ed58cee60b968bc10e3730cb72405d62640157db1e5817369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.867303) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14875869 bytes
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.870932) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.6 rd, 92.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.4 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(14.8) write-amplify(7.4) OK, records in: 7481, records dropped: 540 output_compression: NoCompression
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.870971) EVENT_LOG_v1 {"time_micros": 1759244422870956, "job": 46, "event": "compaction_finished", "compaction_time_micros": 160679, "compaction_time_cpu_micros": 28359, "output_level": 6, "num_output_files": 1, "total_output_size": 14875869, "num_input_records": 7481, "num_output_records": 6941, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244422871634, "job": 46, "event": "table_file_deletion", "file_number": 82}
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759244422874461, "job": 46, "event": "table_file_deletion", "file_number": 80}
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.706273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.874560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.874566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.874567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.874569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 15:00:22 compute-0 ceph-mon[74194]: rocksdb: (Original Log Time 2025/09/30-15:00:22.874571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Sep 30 15:00:22 compute-0 podman[302866]: 2025-09-30 15:00:22.793321801 +0000 UTC m=+0.023068457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.18258 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.27337 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2972698065' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.18261 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2096603462' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.18273 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1592697873' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.27352 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.18279 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3581572178' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Sep 30 15:00:22 compute-0 ceph-mon[74194]: from='client.18285 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:22 compute-0 systemd[1]: Started libpod-conmon-5e98b49b95dee53ed58cee60b968bc10e3730cb72405d62640157db1e5817369.scope.
Sep 30 15:00:22 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18300 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:22 compute-0 systemd[1]: Started libcrun container.
Sep 30 15:00:22 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Sep 30 15:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4c022653f5667c0052a0a46e065614f1252e7858807e9ec4523a01ea30e2b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4c022653f5667c0052a0a46e065614f1252e7858807e9ec4523a01ea30e2b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4c022653f5667c0052a0a46e065614f1252e7858807e9ec4523a01ea30e2b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4c022653f5667c0052a0a46e065614f1252e7858807e9ec4523a01ea30e2b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Sep 30 15:00:22 compute-0 podman[302866]: 2025-09-30 15:00:22.971848655 +0000 UTC m=+0.201595311 container init 5e98b49b95dee53ed58cee60b968bc10e3730cb72405d62640157db1e5817369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Sep 30 15:00:22 compute-0 podman[302866]: 2025-09-30 15:00:22.979927906 +0000 UTC m=+0.209674542 container start 5e98b49b95dee53ed58cee60b968bc10e3730cb72405d62640157db1e5817369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 15:00:22 compute-0 podman[302866]: 2025-09-30 15:00:22.983409273 +0000 UTC m=+0.213155939 container attach 5e98b49b95dee53ed58cee60b968bc10e3730cb72405d62640157db1e5817369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Sep 30 15:00:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27385 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27851 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Sep 30 15:00:23 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2132511110' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Sep 30 15:00:23 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2332530223' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 15:00:23 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:23 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 15:00:23 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:23.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 15:00:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18318 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27400 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 lvm[303046]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 15:00:23 compute-0 lvm[303046]: VG ceph_vg0 finished
Sep 30 15:00:23 compute-0 lvm[303049]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Sep 30 15:00:23 compute-0 lvm[303049]: VG ceph_vg0 finished
Sep 30 15:00:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27860 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 amazing_lewin[302901]: {}
Sep 30 15:00:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Sep 30 15:00:23 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178876532' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 15:00:23 compute-0 systemd[1]: libpod-5e98b49b95dee53ed58cee60b968bc10e3730cb72405d62640157db1e5817369.scope: Deactivated successfully.
Sep 30 15:00:23 compute-0 systemd[1]: libpod-5e98b49b95dee53ed58cee60b968bc10e3730cb72405d62640157db1e5817369.scope: Consumed 1.094s CPU time.
Sep 30 15:00:23 compute-0 podman[302866]: 2025-09-30 15:00:23.711360734 +0000 UTC m=+0.941107390 container died 5e98b49b95dee53ed58cee60b968bc10e3730cb72405d62640157db1e5817369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Sep 30 15:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a4c022653f5667c0052a0a46e065614f1252e7858807e9ec4523a01ea30e2b0-merged.mount: Deactivated successfully.
Sep 30 15:00:23 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:23.763Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 15:00:23 compute-0 podman[302866]: 2025-09-30 15:00:23.785063243 +0000 UTC m=+1.014809869 container remove 5e98b49b95dee53ed58cee60b968bc10e3730cb72405d62640157db1e5817369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Sep 30 15:00:23 compute-0 systemd[1]: libpod-conmon-5e98b49b95dee53ed58cee60b968bc10e3730cb72405d62640157db1e5817369.scope: Deactivated successfully.
Sep 30 15:00:23 compute-0 sudo[302552]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Sep 30 15:00:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18345 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:23 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Sep 30 15:00:23 compute-0 ceph-mon[74194]: log_channel(audit) log [INF] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.27370 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.27836 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.18300 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2146544097' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.27385 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.27851 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2132511110' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2332530223' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4246225559' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.18318 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.27400 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4178876532' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3725818950' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3988245963' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 15:00:23 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:23 compute-0 sudo[303090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Sep 30 15:00:23 compute-0 sudo[303090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:23 compute-0 sudo[303090]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:23 compute-0 nova_compute[261524]: 2025-09-30 15:00:23.970 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 15:00:23 compute-0 nova_compute[261524]: 2025-09-30 15:00:23.970 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Sep 30 15:00:23 compute-0 nova_compute[261524]: 2025-09-30 15:00:23.971 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Sep 30 15:00:23 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27418 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:23 compute-0 nova_compute[261524]: 2025-09-30 15:00:23.985 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Sep 30 15:00:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 15:00:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 15:00:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:23 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 15:00:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:24 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 15:00:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Sep 30 15:00:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544445587' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Sep 30 15:00:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469547326' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:24 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:24 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:24.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Sep 30 15:00:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4105469642' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Sep 30 15:00:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1049965903' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:15:00:24] "GET /metrics HTTP/1.1" 200 48526 "" "Prometheus/2.51.0"
Sep 30 15:00:24 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:15:00:24] "GET /metrics HTTP/1.1" 200 48526 "" "Prometheus/2.51.0"
Sep 30 15:00:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Sep 30 15:00:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982375349' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.27860 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.18345 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' 
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.27418 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2544445587' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3427134988' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3416622452' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2858124576' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3790508759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/469547326' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1197566716' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2514511684' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4105469642' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/470998150' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1049965903' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2982375349' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Sep 30 15:00:24 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3178389753' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 15:00:24 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Sep 30 15:00:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Sep 30 15:00:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4149877547' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 15:00:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Sep 30 15:00:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/928694564' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 15:00:25 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:25 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 15:00:25 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:25.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 15:00:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Sep 30 15:00:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829374134' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 15:00:25 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Sep 30 15:00:25 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2756572804' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/680175931' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3178389753' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2532094848' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2064823419' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1533100333' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1884222502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2523769826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/4149877547' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/928694564' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1886196490' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2277509402' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/158164356' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/687792466' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3829374134' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2756572804' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3882988074' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Sep 30 15:00:26 compute-0 nova_compute[261524]: 2025-09-30 15:00:26.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Sep 30 15:00:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145002520' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Sep 30 15:00:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605193171' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 15:00:26 compute-0 systemd[1]: Starting Hostname Service...
Sep 30 15:00:26 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:26 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:26 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:26.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:26 compute-0 systemd[1]: Started Hostname Service.
Sep 30 15:00:26 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27995 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Sep 30 15:00:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1838169163' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Sep 30 15:00:26 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688777113' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 15:00:26 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 817 B/s rd, 0 op/s
Sep 30 15:00:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28022 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18501 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27532 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Sep 30 15:00:27 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2051850614' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:27.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2276002157' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3289224291' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2497140182' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3145002520' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1605193171' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/296394949' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4081498396' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3206052875' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1838169163' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2688777113' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1818537714' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4098444706' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 nova_compute[261524]: 2025-09-30 15:00:27.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:27 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:27 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:27 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:27.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18525 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18531 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28043 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 15:00:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27550 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:27 compute-0 nova_compute[261524]: 2025-09-30 15:00:27.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 15:00:27 compute-0 nova_compute[261524]: 2025-09-30 15:00:27.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 15:00:27 compute-0 nova_compute[261524]: 2025-09-30 15:00:27.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 15:00:27 compute-0 nova_compute[261524]: 2025-09-30 15:00:27.952 2 DEBUG nova.compute.manager [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Sep 30 15:00:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18543 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:27 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27562 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28070 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:28 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:28 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:28.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18561 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27577 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.27995 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 817 B/s rd, 0 op/s
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.28022 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.18501 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1820244483' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/837919327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.27532 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2051850614' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.18525 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.18531 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/140215070' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/550275605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/183842408' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28085 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Sep 30 15:00:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896336014' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27586 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28094 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18576 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:28 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:28.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 15:00:28 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Sep 30 15:00:28 compute-0 nova_compute[261524]: 2025-09-30 15:00:28.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 15:00:28 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Sep 30 15:00:28 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3611667625' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 15:00:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:28 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 15:00:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 15:00:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 15:00:29 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:29 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 15:00:29 compute-0 nova_compute[261524]: 2025-09-30 15:00:29.006 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 15:00:29 compute-0 nova_compute[261524]: 2025-09-30 15:00:29.006 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 15:00:29 compute-0 nova_compute[261524]: 2025-09-30 15:00:29.007 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 15:00:29 compute-0 nova_compute[261524]: 2025-09-30 15:00:29.007 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Sep 30 15:00:29 compute-0 nova_compute[261524]: 2025-09-30 15:00:29.007 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28106 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27598 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18600 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:29 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:29 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:29 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:29.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:29 compute-0 sudo[303828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Sep 30 15:00:29 compute-0 sudo[303828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Sep 30 15:00:29 compute-0 sudo[303828]: pam_unix(sudo:session): session closed for user root
Sep 30 15:00:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 15:00:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2786686936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 15:00:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Sep 30 15:00:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197740404' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 15:00:29 compute-0 nova_compute[261524]: 2025-09-30 15:00:29.623 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27616 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:29 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Sep 30 15:00:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28127 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 15:00:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:29 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:29 compute-0 nova_compute[261524]: 2025-09-30 15:00:29.818 2 WARNING nova.virt.libvirt.driver [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Sep 30 15:00:29 compute-0 nova_compute[261524]: 2025-09-30 15:00:29.819 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4218MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Sep 30 15:00:29 compute-0 nova_compute[261524]: 2025-09-30 15:00:29.819 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Sep 30 15:00:29 compute-0 nova_compute[261524]: 2025-09-30 15:00:29.820 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] scanning for idle connections..
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: [volumes INFO mgr_util] cleaning up connections: []
Sep 30 15:00:29 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18630 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 nova_compute[261524]: 2025-09-30 15:00:30.020 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Sep 30 15:00:30 compute-0 nova_compute[261524]: 2025-09-30 15:00:30.021 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Sep 30 15:00:30 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Sep 30 15:00:30 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3862948912' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 nova_compute[261524]: 2025-09-30 15:00:30.112 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing inventories for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Sep 30 15:00:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27634 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28148 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 nova_compute[261524]: 2025-09-30 15:00:30.134 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating ProviderTree inventory for provider 06783cfc-6d32-454d-9501-ebd8adea3735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Sep 30 15:00:30 compute-0 nova_compute[261524]: 2025-09-30 15:00:30.134 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Updating inventory in ProviderTree for provider 06783cfc-6d32-454d-9501-ebd8adea3735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Sep 30 15:00:30 compute-0 nova_compute[261524]: 2025-09-30 15:00:30.152 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing aggregate associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Sep 30 15:00:30 compute-0 nova_compute[261524]: 2025-09-30 15:00:30.174 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Refreshing trait associations for resource provider 06783cfc-6d32-454d-9501-ebd8adea3735, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_USB,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Sep 30 15:00:30 compute-0 nova_compute[261524]: 2025-09-30 15:00:30.189 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Sep 30 15:00:30 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.28043 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.27550 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.18543 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.27562 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.28070 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.18561 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.27577 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.28085 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3896336014' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1990802672' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4086652481' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3611667625' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2921001584' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2501091601' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18651 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:30 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 15:00:30 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:30.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 15:00:30 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27649 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:30 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:31 compute-0 nova_compute[261524]: 2025-09-30 15:00:31.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:31 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28181 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:31 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:31 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:31 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:31.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Sep 30 15:00:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855795286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 15:00:31 compute-0 nova_compute[261524]: 2025-09-30 15:00:31.433 2 DEBUG oslo_concurrency.processutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Sep 30 15:00:31 compute-0 nova_compute[261524]: 2025-09-30 15:00:31.438 2 DEBUG nova.compute.provider_tree [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed in ProviderTree for provider: 06783cfc-6d32-454d-9501-ebd8adea3735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.27586 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.28094 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.18576 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.28106 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.27598 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.18600 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/421968462' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2786686936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1197740404' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.27616 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3263838503' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='mgr.14673 192.168.122.100:0/2494440785' entity='mgr.compute-0.buxlkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.28127 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.18630 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2441069486' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3862948912' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.27634 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.28148 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.18651 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.27649 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2292571660' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:31 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Sep 30 15:00:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366524519' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 15:00:31 compute-0 nova_compute[261524]: 2025-09-30 15:00:31.836 2 DEBUG nova.scheduler.client.report [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Inventory has not changed for provider 06783cfc-6d32-454d-9501-ebd8adea3735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Sep 30 15:00:31 compute-0 nova_compute[261524]: 2025-09-30 15:00:31.837 2 DEBUG nova.compute.resource_tracker [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Sep 30 15:00:31 compute-0 nova_compute[261524]: 2025-09-30 15:00:31.837 2 DEBUG oslo_concurrency.lockutils [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Sep 30 15:00:31 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Sep 30 15:00:31 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/925980742' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 15:00:32 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27697 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:32 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18708 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:32 compute-0 nova_compute[261524]: 2025-09-30 15:00:32.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Sep 30 15:00:32 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:32 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:32 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:32.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Sep 30 15:00:32 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/142795229' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 15:00:32 compute-0 ceph-mon[74194]: pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:32 compute-0 ceph-mon[74194]: from='client.28181 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:32 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Sep 30 15:00:32 compute-0 ceph-mon[74194]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Sep 30 15:00:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/855795286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Sep 30 15:00:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/2595080827' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 15:00:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2366524519' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Sep 30 15:00:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/925980742' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 15:00:32 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2040146905' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 15:00:32 compute-0 ceph-mon[74194]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Sep 30 15:00:32 compute-0 nova_compute[261524]: 2025-09-30 15:00:32.838 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 15:00:32 compute-0 nova_compute[261524]: 2025-09-30 15:00:32.838 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 15:00:32 compute-0 nova_compute[261524]: 2025-09-30 15:00:32.870 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 15:00:32 compute-0 nova_compute[261524]: 2025-09-30 15:00:32.871 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 15:00:32 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 15:00:32 compute-0 nova_compute[261524]: 2025-09-30 15:00:32.952 2 DEBUG oslo_service.periodic_task [None req-bb444bc8-acc0-4fb4-be7d-037eb79d4991 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Sep 30 15:00:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Sep 30 15:00:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/309702777' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 15:00:33 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:33 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.001000025s ======
Sep 30 15:00:33 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:33.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Sep 30 15:00:33 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28229 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Sep 30 15:00:33 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3691240882' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: from='client.27697 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: from='client.18708 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1504301565' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/142795229' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4135938025' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Sep 30 15:00:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/3207677746' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/309702777' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/3211958474' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4098314644' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 15:00:33 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/3691240882' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Sep 30 15:00:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:33.764Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Sep 30 15:00:33 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-alertmanager-compute-0[105913]: ts=2025-09-30T15:00:33.765Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Sep 30 15:00:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:33 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Sep 30 15:00:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Sep 30 15:00:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Sep 30 15:00:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-nfs-cephfs-2-0-compute-0-qrbicy[269460]: 30/09/2025 15:00:34 : epoch 68dbebf4 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Sep 30 15:00:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Sep 30 15:00:34 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107933434' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 15:00:34 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27736 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:34 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:34 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:34 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.100 - anonymous [30/Sep/2025:15:00:34.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:34 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18753 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:34 compute-0 ceph-mon[74194]: from='client.28229 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/4086535850' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 15:00:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/2107933434' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Sep 30 15:00:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/2973653588' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 15:00:34 compute-0 ceph-mon[74194]: from='client.27736 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:34 compute-0 ceph-mon[74194]: from='client.18753 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:34 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/1659221764' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 15:00:34 compute-0 ceph-5e3c7776-ac03-5698-b79f-a6dc2d80cae6-mgr-compute-0-buxlkm[74481]: ::ffff:192.168.122.100 - - [30/Sep/2025:15:00:34] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 15:00:34 compute-0 ceph-mgr[74485]: [prometheus INFO cherrypy.access.140711391988992] ::ffff:192.168.122.100 - - [30/Sep/2025:15:00:34] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Sep 30 15:00:34 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Sep 30 15:00:34 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769018614' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 15:00:34 compute-0 ceph-mgr[74485]: log_channel(cluster) log [DBG] : pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:35 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28253 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:35 compute-0 radosgw[95456]: ====== starting new request req=0x7f91011da5d0 =====
Sep 30 15:00:35 compute-0 radosgw[95456]: ====== req done req=0x7f91011da5d0 op status=0 http_status=200 latency=0.000000000s ======
Sep 30 15:00:35 compute-0 radosgw[95456]: beast: 0x7f91011da5d0: 192.168.122.102 - anonymous [30/Sep/2025:15:00:35.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Sep 30 15:00:35 compute-0 ceph-mon[74194]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Sep 30 15:00:35 compute-0 ceph-mon[74194]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374464096' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 15:00:35 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.27760 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:35 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.18777 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:35 compute-0 ceph-mgr[74485]: log_channel(audit) log [DBG] : from='client.28271 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1967874683' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 15:00:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1769018614' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Sep 30 15:00:36 compute-0 ceph-mon[74194]: pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Sep 30 15:00:36 compute-0 ceph-mon[74194]: from='client.28253 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Sep 30 15:00:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.102:0/1832879642' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 15:00:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.100:0/1374464096' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Sep 30 15:00:36 compute-0 ceph-mon[74194]: from='client.? 192.168.122.101:0/4198091271' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Sep 30 15:00:36 compute-0 nova_compute[261524]: 2025-09-30 15:00:36.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
